Display the Same Record Multiple Times with a SQL Query

I would like to share an interview question where you have to display the same record multiple times using a SQL query. This is an easy job with the help of a PL/SQL block, but here we will see how we can do it with a plain SQL query.

 

First check the sample data and desired output.

[sql]CREATE TABLE tmp (
  text VARCHAR2(10),
  min  NUMBER,
  max  NUMBER);

INSERT INTO tmp VALUES ('AAA', 2, 4);
INSERT INTO tmp VALUES ('BBB', 25, 28);
INSERT INTO tmp VALUES ('CCC', 10, 13);[/sql]

Now let's come to the solution: if we can somehow get a series of numbers (1, 2, 3, ... max) as a data set, we can join it with the original table to get the desired output.
We can generate this series of numbers with the help of CONNECT BY LEVEL.
Here LEVEL is a pseudocolumn which returns 1 for a root row, 2 for a child of a root, and so on. CONNECT BY specifies the relationship between parent rows and child rows of the hierarchy. Both are used in hierarchical queries, which I will explain in detail in a separate article.
First check the example below, which prints the numbers 1 to 10 with the help of CONNECT BY LEVEL. This is a common interview question: print the numbers 1 to 10 from DUAL.

[sql]SELECT  LEVEL FROM dual CONNECT BY LEVEL <= 10 ;[/sql]

The above query returns the numbers 1 to 10.
We will use this method to solve the given problem.

 

Method 1 :

[sql]SELECT tmp.text || ' Value is ' || b.L
FROM   tmp,
       (SELECT LEVEL L FROM dual
        CONNECT BY LEVEL <= (SELECT MAX(max) FROM tmp)
       ) b
WHERE  b.L >= min
AND    b.L <= max
ORDER BY 1;[/sql]
In the above query, the inline view b generates the number series 1 to 28 (up to the largest max in the table), and I am joining it with the original table.
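For the sample data above, each text value repeats once for every number between its min and max, so the result of Method 1 (ignoring the exact spacing of the literal) should look like this:

[sql]AAA Value is 2
AAA Value is 3
AAA Value is 4
BBB Value is 25
BBB Value is 26
BBB Value is 27
BBB Value is 28
CCC Value is 10
CCC Value is 11
CCC Value is 12
CCC Value is 13[/sql]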

Method 2:    This is the same query as above, just written in a different way using the WITH clause.

[sql]WITH cnt AS (
        SELECT LEVEL L FROM dual
        CONNECT BY LEVEL <= (SELECT MAX(max) FROM tmp)
     )
SELECT tmp.text || ' Value is ' || cnt.L
FROM   tmp, cnt
WHERE  tmp.min <= cnt.L
AND    tmp.max >= cnt.L
ORDER BY 1;[/sql]
Method 3:  
[sql]SELECT text || ' Value is ' || L
FROM (
  SELECT DISTINCT tmp.text, LEVEL L, tmp.min
  FROM   tmp
  CONNECT BY LEVEL <= tmp.max
)
WHERE L >= min
ORDER BY text, L;[/sql]
Note that without DISTINCT, CONNECT BY against a multi-row table would generate a large number of duplicate rows, so this method is less efficient than the first two.
 
Friends, if you know any other method, please share your knowledge as well.

Follow me at : Facebook

DECODE vs CASE in Oracle SQL

DECODE and CASE both provide IF-THEN-ELSE functionality in Oracle SQL. The DECODE function and the CASE expression are used to transform data values at retrieval time. Before Oracle 8.1, only the DECODE function was available to provide IF-THEN-ELSE functionality, and it can compare only discrete values (not ranges). In Oracle 8.1.6, Oracle introduced the CASE expression, which allows the use of operators such as <, >, BETWEEN, IN, etc.

Everything DECODE can do, CASE can do as well, but there is a lot CASE can do that DECODE cannot. We'll go through detailed examples in this article.

     DECODE Function: 

  • It is an Oracle extension to ANSI SQL.
  • It works like an IF...THEN...ELSE function.
  • It compares VALUE to each IF value, one by one.
  • If an equal match is found, the corresponding THEN value is returned.
  • If no match is found, the ELSE value is returned, if coded.
  • If ELSE is not coded and no match is found, NULL is returned.
  • VALUE, IF, THEN and ELSE can be expressions (e.g. SYSDATE - BIRTHDATE) or functions (e.g. SIGN(salary - avg_salary)).
  • In DECODE, Oracle considers two NULLs to be equivalent.
  • The maximum number of arguments, including VALUE, IF, THEN and ELSE, is 255.
     Syntax :
[sql]DECODE( VALUE, IF_1 , THEN_1
               [,IF_2 ,THEN_2] 
               . . .
               [,IF_N ,THEN_N]
               [,ELSE]
      )[/sql]
The above syntax can be understood as below:[sql]DECODE( VALUE, search_1 ,result_1
             [,search_2 ,result_2] 
             . . .
             [,search_N ,result_N]
             [,default]
      )[/sql]
Example: [sql]SELECT supplier_name,
DECODE(supplier_id, 10000, 'IBM',
                    10001, 'Microsoft',
                    10002, 'Hewlett Packard',
                    'Gateway') result
FROM suppliers;[/sql] The above DECODE statement is equivalent to the following IF-THEN-ELSE statement:
[sql]IF supplier_id = 10000 THEN
   result := 'IBM';
ELSIF supplier_id = 10001 THEN
   result := 'Microsoft';
ELSIF supplier_id = 10002 THEN
   result := 'Hewlett Packard';
ELSE
   result := 'Gateway';
END IF;[/sql]

CASE Statement: 

The CASE expression was first added to SQL in Oracle8i. Oracle9i extended this support to PL/SQL, allowing CASE to be used as an expression or a statement. The CASE expression is a more flexible version of the DECODE function. Oracle supports two flavors of CASE: simple and searched.

A)  Simple CASE expression:[sql]CASE expression WHEN this1 THEN that1
                WHEN this2 THEN that2 . . .
                [ ELSE that]
END[/sql] A simple CASE expression performs an equality test on the given value or expression. The first WHEN value that matches the given value returns the corresponding THEN value. If none of the WHEN values match, the ELSE value is returned; if ELSE is not coded, NULL is returned.
Example: [sql]SELECT ename, empno,
  (CASE deptno
     WHEN 10 THEN 'Accounting'
     WHEN 20 THEN 'Research'
     WHEN 30 THEN 'Sales'
     WHEN 40 THEN 'Operations'
     ELSE 'Unknown'
   END) department
FROM emp
ORDER BY ename;[/sql]
B)   Searched CASE expression: [sql]CASE 
    WHEN condition_1 THEN return_value1
    WHEN condition_2 THEN return_value2 . . .
        [ ELSE return_value]
END[/sql] The searched CASE allows multiple comparison operators (<, >, <=, >=, BETWEEN, LIKE, IN, IS NULL, etc.). The first condition that evaluates to TRUE causes Oracle to return the corresponding THEN value. If none of the conditions is TRUE, the ELSE value is returned; if ELSE is not coded, NULL is returned.
Example:[sql]SELECT 
    CASE WHEN salary BETWEEN 6000 AND 8000 THEN '6K-8K' 
    WHEN salary IN (9000,10000) THEN '9K-10K'
    WHEN EXISTS (SELECT NULL FROM avg_sal WHERE avg_sal = salary) THEN 'EXISTS'
    WHEN TO_CHAR(salary) LIKE '3%' THEN 'Like 3'
    WHEN salary IS NULL THEN 'Null'
    WHEN emp_no IN (SELECT mgr_no FROM department) THEN 'Dept_Mgr'
    ELSE 'Unknown'
    END Salary_Range
FROM employee, avg_sal;[/sql]
Note: CASE is limited to 128 WHEN/THEN pairs (255 arguments in total). This limitation can be overcome by nesting CASE within CASE.
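As a minimal sketch of such nesting (the grade column, table name and labels here are made up for illustration), an outer searched CASE can split the range so that each inner simple CASE stays under the pair limit:

[sql]SELECT CASE
         WHEN grade <= 64 THEN
           CASE grade WHEN 1 THEN 'G1'
                      -- ... up to 128 WHEN/THEN pairs here
                      ELSE 'G64' END
         ELSE
           CASE grade WHEN 65 THEN 'G65'
                      -- ... further pairs in the nested CASE
                      ELSE 'Other' END
       END AS grade_label
FROM   scores;[/sql]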
 

Difference Between DECODE and CASE:

1) CASE can work with logical operators other than ‘=’ :
   DECODE performs an equality check only. CASE is capable of other logical comparisons such as <, >, BETWEEN, LIKE etc.[sql]SELECT ename, empno,
DECODE( deptno ,10 ,'Accounting'
         ,20 ,'Research'
         ,30 ,'Sales'
         ,40 ,'Operations'
              ,'Unknown'
  ) department
FROM emp
ORDER BY ename;[/sql][sql]SELECT ename, empno, 
  (CASE
     WHEN sal < 1000 THEN 'Low'
     WHEN sal BETWEEN 1000 AND 3000 THEN 'Medium'
     WHEN sal > 3000 THEN 'High'
     ELSE 'N/A'
  END) salary
FROM emp
ORDER BY ename;[/sql]

 

2)  CASE can work with predicates and searchable subqueries:

   DECODE works only with expressions that are scalar values. CASE can work with predicates and subqueries in searchable form. [sql]SELECT CASE 
        -- predicate with "IN"
        WHEN salary IN (9000,10000) THEN '9K-10K'
        -- searchable subquery
        WHEN emp_no IN (SELECT mgr_no FROM department) THEN 'Dept_Mgr'
        ELSE 'Unknown'
        END category
    FROM employee;[/sql]
3)  CASE works as a PL/SQL construct, but DECODE is used only in SQL statements.
   DECODE works as a function inside SQL only. CASE can be an efficient substitute for IF-THEN-ELSE in PL/SQL. In the example below, replacing CASE with DECODE raises an error.
[sql]DECLARE
  deptno     NUMBER := 20;
  dept_desc  VARCHAR2(20);
BEGIN 
  dept_desc := CASE deptno
                 WHEN 10 THEN 'Accounting'
                 WHEN 20 THEN 'Research'
                 WHEN 30 THEN 'Sales'
                 WHEN 40 THEN 'Operations'
                 ELSE 'Unknown'
               END;
  DBMS_OUTPUT.PUT_LINE(dept_desc);
END;[/sql]
4) CASE can be used as a parameter of a function/procedure.
   CASE can even work as a parameter to a procedure call, while DECODE cannot.
           [sql]EXEC proc_test(CASE :a WHEN 'THREE' THEN 3 ELSE 0 END);[/sql]
   The above statement works, but the statement below raises an error.
           [sql]EXEC proc_test(DECODE(:a,'THREE',3,0));[/sql]
5) CASE expects datatype consistency, DECODE does not.
        [sql]SELECT DECODE(200,100,100
                   ,'200','200'
                   ,'300') TEST
FROM dual;
--------Output:

TEST
----
200[/sql][sql]SELECT CASE 200 WHEN 100 THEN 100
              WHEN '200' THEN '200'
              ELSE '300'
                END TEST
FROM dual;
------------
Error on line 2 at position 14  WHEN '200' THEN '200'
ORA-00932: inconsistent datatypes: expected NUMBER got CHAR
[/sql]
6) CASE handles NULL differently :
[sql]SELECT DECODE(NULL,NULL,'This is Null'
                ,'This is Not Null') TEST
FROM dual;
--------------Output:
TEST
----
This is Null
[/sql][sql]SELECT CASE NULL WHEN NULL THEN 'This is Null'
          ELSE 'This is Not Null'
                END TEST
FROM dual;
--------------Output:
TEST
----
This is Not Null
[/sql][sql]SELECT CASE WHEN NULL IS NULL THEN 'This is Null'
         ELSE 'This is Not Null'
          END TEST
FROM dual;
----------Output:
TEST
----
This is Null[/sql]

7) CASE complies with ANSI SQL; DECODE is proprietary to Oracle.
8) CASE generally performs at least as well as DECODE in the optimizer.
9) CASE is an expression (and, in PL/SQL, a statement), while DECODE is a function.

Follow me at : Facebook

Convert Rows into Columns or Transpose Rows to Columns In Oracle SQL

Hello Friends, 
Have you ever been in a situation where you, as a developer, know that your data is stored in table rows, but you would like to present it as columns?

There are many ways to handle this situation. Here I am sharing a few of them which I know.

Below is the sample table and data :
————————————————————
[sql]CREATE TABLE yr_table (
        Roll_No            NUMBER,
        Subject            VARCHAR2(20),
        Marks              NUMBER
);
[/sql]
[sql]insert into yr_table VALUES (1212324,'MTH',90);
insert into yr_table VALUES (1212324,'PHY',72);
insert into yr_table VALUES (1212324,'CHE',85);
insert into yr_table VALUES (1212324,'BIO',78);
insert into yr_table VALUES (1212334,'MTH',85);
insert into yr_table VALUES (1212334,'PHY',65);
insert into yr_table VALUES (1212334,'CHE',74);
insert into yr_table VALUES (1212672,'MTH',88);
insert into yr_table VALUES (1212672,'PHY',42);
insert into yr_table VALUES (1212672,'BIO',12);
COMMIT;
[/sql]
——————————————————————-

Now I would like to represent the above data in the format below, with one row per roll number and one column per subject.

We can achieve this output in different ways.
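Based on the sample data, the desired output (which every method below produces) looks like this; a blank cell is a NULL where a student has no marks for that subject:

[sql]   ROLL_NO        MTH        PHY        CHE        BIO
---------- ---------- ---------- ---------- ----------
   1212324         90         72         85         78
   1212334         85         65         74
   1212672         88         42                    12[/sql]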

# Method 1 ->  Using DECODE  function

[sql]SELECT ROLL_NO,
       MAX( DECODE( SUBJECT, 'MTH', MARKS ) ) AS MTH,
       MAX( DECODE( SUBJECT, 'PHY', MARKS ) ) AS PHY,
       MAX( DECODE( SUBJECT, 'CHE', MARKS ) ) AS CHE,
       MAX( DECODE( SUBJECT, 'BIO', MARKS ) ) AS BIO
FROM yr_table
GROUP BY ROLL_NO ORDER BY 1;
[/sql]
To understand the above query, first see the output of the same query without the MAX function and GROUP BY; then you can see how the query works.
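Without MAX and GROUP BY, each source row lands in its own output row, with the marks in the matching subject column and NULL (shown blank) everywhere else; MAX then collapses these rows per roll number. A sketch of the first few rows:

[sql]   ROLL_NO        MTH        PHY        CHE        BIO
---------- ---------- ---------- ---------- ----------
   1212324         90
   1212324                    72
   1212324                               85
   1212324                                          78
   1212334         85
   ...[/sql]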

Method 2 -> Using CASE expression 
[sql]SELECT ROLL_NO,
       MAX( CASE WHEN SUBJECT = 'MTH' THEN MARKS END) MTH,
       MAX( CASE WHEN SUBJECT = 'PHY' THEN MARKS END) PHY,
       MAX( CASE WHEN SUBJECT = 'CHE' THEN MARKS END) CHE,
       MAX( CASE WHEN SUBJECT = 'BIO' THEN MARKS END) BIO
FROM yr_table
GROUP BY ROLL_NO ORDER BY 1;
[/sql]
Here the CASE expression works the same way as the DECODE function.

Method 3 -> Using PIVOT Operator

The PIVOT and the UNPIVOT operators were introduced in Oracle version 11g. The PIVOT operator takes data in separate rows, aggregates it and converts it into columns. The following query will give the same result as the query above, just by using the PIVOT operator.
[sql]SELECT * FROM yr_table
PIVOT (
  MAX(MARKS) FOR SUBJECT IN ('MTH' AS MTH, 'PHY' AS PHY, 'CHE' AS CHE, 'BIO' AS BIO)
)
ORDER BY 1;
[/sql]
 You can check the link below for more clarification on the PIVOT operator: http://www.oracle.com/technetwork/articles/sql/11g-pivot-097235.html 

Method 4 -> Using WITH clause and PIVOT Operator
[sql]WITH TMP AS (
    SELECT ROLL_NO, SUBJECT, MARKS FROM yr_table
)
SELECT * FROM TMP 
PIVOT (
  MAX(MARKS) FOR SUBJECT IN ('MTH' AS MTH, 'PHY' AS PHY, 'CHE' AS CHE, 'BIO' AS BIO)
)
ORDER BY 1;
[/sql]

The WITH clause was added to the Oracle SQL syntax in Oracle 9.2. It can be used to reduce repetition and simplify complex SQL statements. Don't get confused by the WITH clause here; just think of it as creating a temporary result set which we can then use in the SELECT statement.

Method 5 -> Using WITH clause and Sub-query
[sql]WITH TMP AS (
        SELECT ROLL_NO, SUBJECT, MARKS FROM yr_table
)
SELECT Y.ROLL_NO,
       (SELECT TMP.MARKS FROM TMP WHERE TMP.ROLL_NO = Y.ROLL_NO AND TMP.SUBJECT = 'MTH') AS MTH,
       (SELECT TMP.MARKS FROM TMP WHERE TMP.ROLL_NO = Y.ROLL_NO AND TMP.SUBJECT = 'PHY') AS PHY,
       (SELECT TMP.MARKS FROM TMP WHERE TMP.ROLL_NO = Y.ROLL_NO AND TMP.SUBJECT = 'CHE') AS CHE,
       (SELECT TMP.MARKS FROM TMP WHERE TMP.ROLL_NO = Y.ROLL_NO AND TMP.SUBJECT = 'BIO') AS BIO
FROM (SELECT DISTINCT ROLL_NO FROM yr_table) Y
ORDER BY 1;
[/sql]
Method 6 -> Using Multiple Joins

[sql]SELECT DISTINCT
       A.ROLL_NO,
       B.MARKS AS MTH,
       C.MARKS AS PHY,
       D.MARKS AS CHE,
       E.MARKS AS BIO
FROM yr_table A 
LEFT JOIN yr_table B
    ON A.ROLL_NO = B.ROLL_NO AND B.SUBJECT = 'MTH'
LEFT JOIN yr_table C 
    ON A.ROLL_NO = C.ROLL_NO AND C.SUBJECT = 'PHY'
LEFT JOIN yr_table D
    ON A.ROLL_NO = D.ROLL_NO AND D.SUBJECT = 'CHE'
LEFT JOIN yr_table E
    ON A.ROLL_NO = E.ROLL_NO AND E.SUBJECT = 'BIO'
ORDER BY 1;
[/sql]

Friends, if you know any other method, please share your knowledge as well.

Follow me at :
Facebook

SQL JOIN Explained

Join in SQL is another important database topic. The good thing about this topic is that we all know about it, but even so we make mistakes in explaining it. We know all the definitions and all the types of join, yet we are still not clear about it. The reason, I would say, is that instead of being application minded we restrict ourselves to the content alone. Thinking about the audience of this article, it will definitely help college students, but it is also beneficial to people working on databases in MNCs.

Whitepaper to troubleshoot SQL Server performance problems

Here is the abstract of my whitepaper on SQL counters to troubleshoot SQL Server performance, which is really helpful for DBAs.
 
The primary goal of this paper is to provide important counters in the PERFMON and SQL server Profiler while tackling resource bottlenecks for diagnosing and troubleshooting SQL Server performance problems in common customer scenarios by using publicly available tools. This paper can be used as a reference or guide for database administrators, database developers and for all MS SQL users who are facing performance issues.
You can download this presentation in pdf format from here.


Introduction

Performance is one of the major factors in the successful execution of any site or business, so performance tuning becomes one of the major tasks for database administrators. Many customers experience an occasional slowdown of their SQL Server database. The reasons can range from a poorly designed database to a system that is improperly configured for the workload. As an administrator, you want to proactively prevent or minimize problems and, when they occur, diagnose the cause and, when possible, take corrective actions to fix the problem. This paper covers troubleshooting common performance problems by using publicly available tools such as SQL Server Profiler, System Monitor (Perfmon), and the new Dynamic Management Views in Microsoft SQL Server 2005.

 
The primary goal of this paper is to provide useful counters in the PERFMON and SQL server Profiler while handling resource bottlenecks for diagnosing and troubleshooting SQL Server performance problems in common customer scenarios by using publicly available tools.
 
While using tools such as Perfmon and SQL Server Profiler, we tend to use all of the counters, which increases the file size (e.g. the trace file) and adds unnecessary analysis time. This brings us to the goal of this paper, which is to showcase the useful or necessary counters. The target audience of this article is database administrators and developers throughout the world who are facing database performance issues.
 

Scope

This article is not proposing a new software development methodology. It is not promoting any particular software or utility. Instead, the purpose of this article is to provide important counters in the PERFMON and SQL server Profiler while tackling resource bottlenecks for diagnosing and troubleshooting SQL Server performance problems in common customer scenarios by using publicly available tools.
 

Important System monitor (Perfmon) counters and their use

The SQL Statistics object provides counters to monitor compilation and the type of requests that are sent to an instance of SQL Server. You must monitor the number of query compilations and recompilations in conjunction with the number of batches received to find out if the compiles are contributing to high CPU use. Ideally, the ratio of SQL Recompilations/sec to Batch Requests/sec should be very low unless users are submitting ad hoc queries. Batch Requests/sec, SQL Compilations/sec and SQL Recompilations/sec are the key data counters for SQL Server: SQL Statistics.
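As a sketch, these same counters can also be read from inside SQL Server via the sys.dm_os_performance_counters DMV (SQL Server 2005 and later); the counter names below are as they appear on a default instance, so verify them on your own server:

[sql]SELECT counter_name, cntr_value
FROM   sys.dm_os_performance_counters
WHERE  object_name LIKE '%SQL Statistics%'
AND    counter_name IN ('Batch Requests/sec',
                        'SQL Compilations/sec',
                        'SQL Re-Compilations/sec');[/sql]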
 
To find memory bottlenecks we can use Memory object (Perfmon) counters such as Cache Bytes for the system working set, Pool Nonpaged Bytes for the size of the nonpaged pool, and Available Bytes (the equivalent of the Available value in Task Manager).
 
I/O bottlenecks can be traced using Physical Disk Object. Avg. Disk Queue Length represents the average number of physical read and writes requests that were queued on the selected physical disk during the sampling period. If your I/O system is overloaded, more read/write operations will be waiting. If your disk queue length frequently exceeds a value of 2 during peak usage of SQL Server, then you might have an I/O bottleneck.
 
Avg. Disk Sec/Read is the average time, in seconds, of a read of data from the disk and Avg. Disk Sec/Write is the average time, in seconds, of a write of data to the disk.
Please refer to the table below while analyzing the trace file.
 
Avg. Disk Sec/Read or Avg. Disk Sec/Write    Comment
Less than 10 ms                              Very good
Between 10 and 20 ms                         Okay
Between 20 and 50 ms                         Slow, needs attention
Greater than 50 ms                           Serious I/O bottleneck
 
 
Physical Disk: %Disk Time is the percentage of elapsed time that the selected disk drive was busy servicing read or write requests. A general guideline is that a value greater than 50 per cent represents an I/O bottleneck. Avg. Disk Reads/Sec is the rate of read operations on the disk, and Avg. Disk Writes/Sec is the rate of write operations. Make sure each of these numbers is less than 85 per cent of the disk capacity, as disk access time increases exponentially beyond 85 per cent capacity.
 

Important SQL profiler counters and their use

If the Perfmon counters indicate a high number of recompiles, the recompiles could be contributing to the high CPU consumed by SQL Server. We would then need to look at the profiler trace to find the stored procedures that were being recompiled. The SQL Server Profiler trace gives us that information along with the reason for the recompilation.
 
We can use the SP:Recompile and SQL:StmtRecompile events to get this information. The SP:Recompile and the SQL:StmtRecompile event classes indicate which stored procedures and statements have been recompiled. When you compile a stored procedure, one event is generated for the stored procedure and one for each statement that is compiled.
 
However, when a stored procedure recompiles, only the statement that caused the recompilation is recompiled (not the entire stored procedure as in SQL Server 2000). Some of the more important data columns for the SP:Recompile event class are listed below. The EventSubClass data column in particular is important for determining the reason for the recompile.
SP:Recompile is triggered once for the procedure or trigger that is recompiled and is not fired for an ad hoc batch that could likely be recompiled. In SQL Server 2005, it is more useful to monitor SQL:StmtRecompiles as this event class is fired when any type of batch, ad hoc, stored procedure, or trigger is recompiled.
 
The key data columns we look at in these events are EventClass, EventSubClass, ObjectID (represents stored procedure that contains this statement), SPID, Start Time, Sql Handle and Text Data.

Profiler

Profiler can run, similar to Performance Monitor, either in a GUI mode, or in an automated manner with outputs to files or databases. Sitting and watching the GUI window is usually referred to as SQL-TV. That may be a good way to spot check issues on a database server, or do some ad hoc troubleshooting, but for real performance monitoring you need to set up an automated process and capture the data for processing later. Profiler collects information on events within SQL Server.
 
Cursors, Database, Errors and Warnings, Locks, Objects, Performance, Scans, Security Audit, Server, Sessions, Stored Procedures, TSQL and Transactions are the broad categories of events used in Profiler. Each of these categories has a large number of events within it. Rather than detail all the various options, the following is a minimum set of events for capturing basic TSQL performance.
 
Stored Procedures – RPC:Completed, which records the end point of a remote procedure call (RPC). These are the more common events you'll see when an application is running against your database.
 
Stored Procedures – SP:Completed, which records calls against procedures from the system itself, meaning you've logged into SQL Server and you're calling procedures through Query Analyzer, or the server is calling itself from a SQL Agent process.
 
TSQL – SQL:BatchCompleted, registered by TSQL statements running locally against the server, which is not the same as a stored procedure call; for example, SELECT * FROM table_name.
 
Each of these events can collect a large number of columns of information. Each column may or may not be collected from a given event, and may hold different data, depending on the event and column in question. The most useful columns can be analyzed with the help of the table below.
 
 
Column Name and Usage:
TextData: The text of the stored procedure call, including the parameters used for the individual call, or the SQL batch statement executed.
ApplicationName: May or may not be filled in, depending on the connection string used by the calling application. To ease troubleshooting and performance tuning, it is worth making it a standard within your organization to require this as part of the connection from any application.
LoginName: The NT domain and user name that executed the procedure or SQL batch.
CPU: An actual measure of CPU time, expressed in milliseconds, used by the event in question.
Reads: The count of logical read operations made by the event being captured.
Writes: Unlike Reads, this is the physical writes performed by the procedure or SQL batch.
Duration: The length of time the captured event took to complete. In SQL Server 2000 this is in milliseconds; as of SQL Server 2005 it is recorded in microseconds. Keep this in mind if you compare query performance between the two versions using trace events.
SPID: The server process ID of the event captured. This can sometimes be used to trace a chain of events.
StartTime: The start time of the event.
 
 
In short, a great deal of information can be gleaned from Profiler. You may or may not be aware, but in previous versions of SQL Server, running Trace, as Profiler was called previously, against a system could bring the system to its knees before you gathered enough information to get a meaningful set of data. This is no longer true.
 
It is possible to turn on enough events and columns to impact the performance of a system but with a reasonable configuration Profiler will use much less than 1% of system resources. That does not mean that you should load up counters on the GUI and sit back to watch your server. This will add some load and can be easily avoided. Instead, take advantage of the extended stored procedures that are available for creating and managing SQL Traces. These will allow you to gather data and write it out to disk (either a local one or a remote one).
 
This means that you'll have the data in a transportable format that you can import into databases or spreadsheets to explore, search and clean to your heart's content. You can also write the results directly to a database, but I've generally found this to be slower, and therefore to have more impact on the server, than writing to a file; this is supported by recommendations in the BOL as well. Here is a basic script to create a trace for output to a file.
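A minimal sketch of such a script, assuming the standard trace event IDs 10 (RPC:Completed) and 12 (SQL:BatchCompleted) and column IDs 1 (TextData), 13 (Duration), 16 (Reads), 17 (Writes) and 18 (CPU); verify these IDs against sp_trace_setevent in Books Online before relying on them:

[sql]DECLARE @TraceId INT, @maxsize BIGINT;
SET @maxsize = 100; -- maximum trace file size in MB

-- create the trace, writing to a file with rollover (option 2)
EXEC sp_trace_create @TraceId OUTPUT, 2, N'c:\temp\my_trace', @maxsize;

-- capture TextData, Duration, Reads, Writes and CPU
-- for RPC:Completed (10) and SQL:BatchCompleted (12)
EXEC sp_trace_setevent @TraceId, 10, 1,  1;
EXEC sp_trace_setevent @TraceId, 10, 13, 1;
EXEC sp_trace_setevent @TraceId, 10, 16, 1;
EXEC sp_trace_setevent @TraceId, 10, 17, 1;
EXEC sp_trace_setevent @TraceId, 10, 18, 1;
EXEC sp_trace_setevent @TraceId, 12, 1,  1;
EXEC sp_trace_setevent @TraceId, 12, 13, 1;
EXEC sp_trace_setevent @TraceId, 12, 16, 1;
EXEC sp_trace_setevent @TraceId, 12, 17, 1;
EXEC sp_trace_setevent @TraceId, 12, 18, 1;

-- start the trace
EXEC sp_trace_setstatus @TraceId, 1;[/sql]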
 
In order to further limit the data collected, you may want to add filters to restrict by application or login in order to eliminate noise:
 
EXEC sp_trace_setfilter
      @traceid, 
      @columnid,
      @logical_operator,
      @comparison_operator,
      @value
So, for example, to keep the trace's own events from intruding on the data collection above, we could add the filter below (comparison operator 1 means "not equal", so events generated by SQL Profiler itself are excluded).
EXEC sp_trace_setfilter
   @traceid = @TraceId,
   @columnid = 10,            -- ApplicationName column
   @logical_operator = 0,     -- logical AND
   @comparison_operator = 1,  -- not equal
   @value = N'SQL Profiler'
 
The output can be loaded into the SQL Server Profiler GUI for browsing, or you can use this function to import the trace data into a table.
 
SELECT * INTO temp_trc
   FROM fn_trace_gettable('c:\temp\my_trace.trc', DEFAULT);
 

Evaluating Profiler data

 Now that you've collected all this data about your system, what do you do with it? There are many ways in which you can use Profiler data. The simplest approach is to use the data from the Duration column as a measure of how slowly a procedure is performing.
 
After collecting the information and moving it into a table, you can begin writing queries against it. Grouping by stored procedure name, stripping off the parameters from the string, will allow you to use aggregates, average, maximum, and minimum, to outline the poorest performing procedures on the system. From there you can look at their CPU or I/O and query plans in order to identify the tuning opportunities available to you.
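As a sketch against the temp_trc table loaded above (assuming the Duration, CPU and Reads columns were captured; grouping on a fixed-length prefix of TextData is a crude stand-in for stripping off the parameters):

[sql]SELECT TOP 20
       CONVERT(VARCHAR(60), TextData) AS proc_text,
       COUNT(*)      AS calls,
       AVG(Duration) AS avg_duration,
       MAX(Duration) AS max_duration,
       AVG(CPU)      AS avg_cpu,
       AVG(Reads)    AS avg_reads
FROM   temp_trc
GROUP BY CONVERT(VARCHAR(60), TextData)
ORDER BY avg_duration DESC;[/sql]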
 
With the release of SQL Server 2005, one additional piece of functionality was added to Profiler which will radically increase your ability to identify where you have performance issues. You now have the ability to load a performance monitor log file and a profiler trace file into the Profiler. This allows you to identify points of time when the server was under load and see what was occurring within the database at the same time.
 
 SQL Server Performance chart

For memory bottlenecks we can use the SQL Server: Buffer Manager object: a low Buffer cache hit ratio, low Page life expectancy, a high number of Checkpoint pages/sec and a high number of Lazy writes/sec all indicate memory pressure.


Conclusion

 We can say that database administrators, database developers and all MS SQL users who are facing performance issues will be able to save precious time by using Perfmon and SQL Server Profiler as described here. Capturing and analyzing a trace file is a tedious and time-consuming task which can be eased with the help of this paper.
         
Most of the time, as soon as we receive a database performance issue, we use all the counters irrespective of their usage and importance; this paper identifies the important counters in Perfmon and SQL Server Profiler for tackling resource bottlenecks. Performance is one of the major factors in the successful execution of any site or business, so performance tuning becomes one of the major tasks for database administrators, and this paper will help in reducing their burden.

You can view the same article at slideshare too!


How to Choose the Right Database Management System

What is a database?
 
It is the hub of a business house and the nerve center of a company's IT system, holding crucial information that is vital to the growth and progress of the organization. It is thanks to databases that transactions ranging in the millions take place on a day-to-day basis.

MySQL Our SQL – An Exciting RDBMS

MySQL is the world's most used relational database management system, and the reason it is so famous is that it is open source under the General Public License. Its current developer is Oracle, but it was initially developed by a company named MySQL AB, which was acquired by Sun Microsystems in 2008; Sun Microsystems was in turn acquired by Oracle in 2010. Its first release was in 1995, and its version for Windows was released in 1998. It is written in C and C++.

FastLoad in TeraData

FASTLOAD
FastLoad is a utility that can be used to quickly load large amounts of data to an empty table on Teradata. It uses multiple sessions to load data to the Teradata target table. FastLoad divides its job into two phases, both designed for speed. They are typically known as Phase 1 and Phase 2, and are sometimes referred to as the Acquisition Phase and the Application Phase.

PHASE 1: Acquisition
The primary function of Phase 1 is to transfer data from the host computer to the Access Module Processors (AMPs) as quickly as possible. When the Parsing Engine (PE) receives the INSERT command, it uses one session to parse the SQL just once. The PE is the Teradata software processor responsible for parsing syntax and generating a plan to execute the request. It then opens a Teradata session from the FastLoad client directly to the AMPs. Then by default, one session is created for each AMP.


White paper on Spool Space in Teradata

This white paper on Spool Space in Teradata was presented by me at Wipro, where I currently work on Teradata. So the first question that arises is:

What is Spool Space in TeraData?
Teradata spool space is unused perm space that is used for running queries. Spool space holds intermediate rows during processing, as well as the rows in the answer set of a transaction.
Teradata recommends allocating 20% of the available perm space for spool space, but this varies across applications.

What is Hibernate and how it works – a minimal introduction


During my post graduation we had to build a project, and at that time I heard the word Hibernate. I was not very curious about it then; I only knew that it was a kind of framework. My friends used it in their projects too, but since I was not aware of it, I wrote my own Java code for all that stuff.

But when I joined an organization, I came to know that we were using Hibernate there as well, and now the situation was very difficult because I did not know the A, B, C of it. My seniors were very cooperative, though, and they helped me a lot to overcome the situation. So now I want to share my knowledge with you so that you will not face this problem in the future. Let's start with the basic questions: what, when, why and how?

What is Hibernate?

Hibernate is a free, open source Java package that makes it easy to work with relational databases. It provides a query service for Java. Hibernate lets you develop persistent classes following common Java idiom, including association, inheritance, polymorphism, composition, and the Java collections framework. Hibernate is typically used with J2EE (Java web applications).

It is a very popular open source technology which fits well both with Java and .NET technologies. The Hibernate 3.0 core is 68,549 lines of Java code. Hibernate maps the Java classes to the database tables. It also provides the data query and retrieval facilities that significantly reduce the development time. Hibernate can be used in Java Swing applications, Java Servlet-based applications, or J2EE applications using EJB session beans. Hibernate is free software that is distributed under the GNU Lesser General Public License.
Source: Wikipedia

When Hibernate?

Hibernate was started in 2001 by Gavin King as an alternative to using EJB2-style entity beans. Its mission was simply to offer better capabilities than EJB2 (Enterprise JavaBeans) by simplifying the complexities and supplying missing features. Early in 2003, the Hibernate development team began the Hibernate2 releases, which offered many significant improvements over the first release.

JBoss, Inc. (now part of Red Hat) later hired the lead Hibernate developers and worked with them in supporting Hibernate. On 9 February 2012 they released a stable version, Hibernate 4.1 Final. Hibernate is developed in Java, so it uses the JVM as its execution platform.

Why Hibernate?

The most important question is why we need Hibernate in our application, so let's discuss it in a simple way. We know that when we develop any application, we require a database to store our data.

For example, suppose we are building a web-based application for a post office that has to keep track of all information related to post, money orders, couriers, and so on. For storage we use MySQL as the backend database, and we code everything in Java classes: how to add data to the database, how to perform operations on it, etc. Now, just as we are about to deliver the product (i.e., our application), the client's requirement suddenly changes: they now want MS Access as their database because it is freely available and easy to understand.

Can you imagine the situation now? You have a limited amount of time and you have to change every class that deals with the database; worse, because it is a live application you have to change all 50 files (this number could be 5000, depending on the scenario). Hibernate came into the picture to handle this kind of situation. Hibernate provides an API (Application Programming Interface) with built-in functions such as insert(), delete(), and update(). It also provides a persistence architecture for the code (each class is independent). Here we follow the standard Hibernate architecture, in which XML is used for all the configuration settings.

 

How It Works?

Hibernate does not get in your way, nor does it force you to change the way your objects behave. Your classes do not need to implement any interfaces in order to become persistent. All you need to do is create an XML "mapping document" telling Hibernate which classes you want to be able to store in a database, and how they relate to the tables and columns in that database; you can then ask Hibernate to fetch data as objects, or store objects as data, for you. At runtime, Hibernate reads the mapping document and dynamically builds Java classes to manage the translation between the database and the Java program.
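As a sketch, a mapping document for a hypothetical `Employee` class might look like the fragment below. The class, table, and column names are illustrative only, not taken from any real project:

```xml
<!-- Employee.hbm.xml: maps the Java class com.example.Employee
     to the EMPLOYEE table (all names hypothetical) -->
<hibernate-mapping>
  <class name="com.example.Employee" table="EMPLOYEE">
    <id name="id" column="EMP_ID">
      <generator class="native"/>  <!-- let the database generate the key -->
    </id>
    <property name="name" column="EMP_NAME"/>
  </class>
</hibernate-mapping>
```

Each `<property>` ties a JavaBean-style getter/setter pair on the class to a column, which is exactly the translation Hibernate performs at runtime.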

Now, there is one question related to Hibernate that was asked to me during my campus interview, but at that time I was not able to answer it. Let's find the answer.

What is dirty checking?

Hibernate automatically detects changes to an object's state in order to synchronize the updated state with the database; this is called dirty checking. An important note here: Hibernate compares objects by value, except for collections, which are compared by identity. For this reason, you should return exactly the same collection instance that Hibernate passed to the setter method, to prevent unnecessary database updates.
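The idea can be illustrated with a toy sketch in plain Java. This is not Hibernate's actual implementation; it only simulates the by-value comparison described above: keep a snapshot of the state as it was loaded, and at "flush" time compare it with the current state to decide whether an UPDATE is needed.

```java
import java.util.Arrays;

// Toy illustration of dirty checking (not Hibernate's real code).
public class DirtyCheckDemo {

    // Hibernate compares by value: any changed field makes the object dirty.
    static boolean isDirty(String[] loadedSnapshot, String[] currentState) {
        return !Arrays.equals(loadedSnapshot, currentState);
    }

    public static void main(String[] args) {
        String[] snapshot = {"Asha", "HR"};      // state as read from the DB
        String[] current  = {"Asha", "Finance"}; // state after the application changed it

        // A dirty object would be synchronized with an UPDATE at flush time.
        System.out.println(isDirty(snapshot, current) ? "UPDATE needed" : "no UPDATE");
    }
}
```

This also shows why returning a different collection instance from a setter is costly: an identity comparison would report the object as dirty even when nothing changed.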

Now I will discuss an example of a simple Java application; if you have basic knowledge of Java and JDBC (Java Database Connectivity), you will understand it comfortably.

Suppose you have to build an application that will maintain records of all the employees in your office. We can approach it in the following way.
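As a starting point, the employee record would be an ordinary POJO, with no Hibernate interfaces required. This is a minimal sketch with hypothetical fields (`id`, `name`); a real entity would carry whatever attributes the office needs.

```java
// Hypothetical Employee entity: a plain Java class Hibernate could manage.
public class Employee {
    private int id;
    private String name;

    public Employee() { }                 // no-arg constructor, required by Hibernate

    public Employee(int id, String name) {
        this.id = id;
        this.name = name;
    }

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public static void main(String[] args) {
        Employee e = new Employee(101, "Asha");
        System.out.println(e.getId() + " " + e.getName());
    }
}
```

With a mapping document tying this class to a table, Hibernate can load and store `Employee` objects without the class implementing any framework interface.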

 

 

Simple Design pattern to follow
