

This is Chapter 1: Introductory Transact-SQL from the book The Guru’s Guide to Transact-SQL (ISBN: 0-20161-576-2), written by Ken Henderson, published by Addison-Wesley Professional.

© Addison-Wesley. All rights reserved.

Introductory Transact-SQL

The single biggest challenge to learning SQL programming is unlearning procedural programming.—Joe Celko

SQL is the lingua franca of the database world. Most modern DBMSs use some type of SQL dialect as their primary query language, including SQL Server. You can use SQL to create or destroy objects on the database server such as tables and to do things with those objects, such as put data into them or query them for that data. No single vendor owns SQL, and each is free to tailor the language to better satisfy its own customer base. Despite this latitude, there is a multilateral agreement against which each implementation is measured. It’s commonly referred to as the ANSI/ISO SQL standard and is governed by the National Committee on Information Technology Standards (NCITS H2). This standard is actually several standards—each named after the year in which it was adopted. Each standard builds on the ones before it, introducing new features, refining language syntax, and so on. The 1992 version of the standard—commonly referred to as SQL-92—is probably the most popular of these and is definitely the most widely adopted by DBMS vendors. As with other languages, vendor implementations of SQL are rated according to their level of compliance with the ANSI/ISO standard. Most vendors are compliant with at least the entry-level SQL-92 specification, though some go further.

Transact-SQL is Microsoft SQL Server’s implementation of the language. It is largely SQL-92 compliant, so if you’re familiar with another vendor’s flavor of SQL, you’ll probably feel right at home with Transact-SQL. Since helping you to become fluent in Transact-SQL is the primary focus of this book and an important step in becoming a skilled SQL Server practitioner, it’s instructive to begin with a brief tour of language fundamentals.

Much of the difficulty typically associated with learning SQL is due to the way it’s presented in books and courseware. Frequently, the would-be SQL practitioner is forced to run a gauntlet of syntax sinkholes and query quicksand while lugging a ten-volume set on database design and performance tuning on her back. It’s easy to get disoriented in such a situation, to become inundated with nonessential information—to get bogged down in the details. Add to this the obligatory dose of relational database theory, and the SQL neophyte is ready to leave summer camp early.

As with the rest of this book, this chapter attempts to keep things simple. It takes you through the process of creating tables, adding data to them, and querying those tables, one step at a time. This chapter focuses exclusively on the practical details of getting real work done with SQL—it illuminates the bare necessities of Transact-SQL as quickly and as concisely as possible.

In this chapter, I assume you have little or no prior knowledge of Transact-SQL. If you already have a basic working knowledge of the language, you can safely skip to the next chapter.

Like most computer languages, Transact-SQL is best learned by experience. The view from the trenches is usually better than the one from the tower.

Choosing a SQL Editor

The first step on the road to Transact-SQL fluency is to pick a SQL entry and editing tool. You’ll use this facility to enter SQL commands, execute them, and view their results. The tool you pick will be your constant companion throughout the rest of this book, so choose wisely.

The Query Analyzer tool that’s included with SQL Server is a respectable SQL entry facility. It’s certainly capable of allowing you to work through the examples in this book. Those familiar with previous versions of SQL Server will remember this tool as ISQL/W. The new version resembles its predecessor in many ways but sports a slightly more modern interface. The name change reflects the fact that the new version is more than a mere SQL entry facility. In addition to basic query entry and execution facilities, it provides a wealth of analysis and tuning info (see Chapter 16, “Transact-SQL Performance Tuning,” for more information).

The first order of business when you start Query Analyzer is to connect to the server, so make sure your server is running. Enter your username and password when prompted (if your server is newly installed, username sa defaults to an empty password) and select your server name. If Query Analyzer and SQL Server are running on the same machine, you can use “.” (a period—with no quotes) or (local) (don’t forget the parentheses) for the server name. The user interface of the tool is self-explanatory: You key T-SQL queries into the top pane of the window and view results in the bottom one.

The databases currently defined on your server are displayed in a combo-box on each window’s toolbar. You can select one from the list to make it the active database for the queries you run in that window. Pressing Ctrl-E, F5, or Alt-X runs your query, while Ctrl-F5 checks it for syntax errors.

Hot Tip
If you execute a query while a selection is active in the edit window, Query Analyzer will execute the selection rather than the entire query. This is handy for executing queries in steps and for quickly executing another command without opening a new window.

One of the features sorely missed in Query Analyzer is the Alt-F1 object help facility. In ISQL/W, you could select an object name in the edit window and press Alt-F1 to get help on it. For tables and views, this presented an abbreviated sp_help report. It was quite handy and saved many a trip to a new query window merely to list an object’s columns.

If you’re a command-line devotee, you may prefer the OSQL utility to Query Analyzer. OSQL is an ODBC-based command-line utility that ships with SQL Server. Like Query Analyzer, OSQL can be used to enter Transact-SQL statements and stored procedures to execute. Once you’ve entered a query, hit return to drop to a new line, then type GO and hit return again to run it (GO must be leftmost on the line). To exit OSQL, type EXIT and hit return.

OSQL has a wealth of command-line and runtime options that are too lengthy to go into here. See the SQL Books Online for more info.

A third option is to use the Sequin SQL editor included on the CD with this book. Sequin sports many of Query Analyzer’s facilities without abandoning the worthwhile features of its predecessors. It has the advantage of being able to query any server or DBMS for which an ODBC provider exists. This means, for example, that you can query Access, Oracle, and SQL Server using just one tool.

Creating a Database

You might already have a database in which you can create some temporary tables for the purpose of working through the examples in this book. If you don’t, creating one is easy enough. In Transact-SQL, you create databases using the CREATE DATABASE command. The complete syntax can be quite complex, but here’s the simplest form:

CREATE DATABASE GG_TS
Run this command in Query Analyzer to create a scratch database for working through the examples in this book. Behind the scenes, SQL Server creates two operating system files to house the new database: GG_TS.MDF and GG_TS_Log.LDF. Data resides in the first file; transaction log information lives in the second. A database’s transaction log is the area where the server first carries out changes made to the data. Once those changes succeed, they’re applied atomically—in one piece—to the actual data. It’s advantageous for both recoverability and performance to separate user data from transaction log data, so SQL Server defaults to working this way. If you don’t specifically indicate a transaction log location (as in the example above), SQL Server selects one for you (the default location is the data directory that was selected during installation).

Notice that we didn’t specify a size for the database or for either of the files. Our new database is set up so that it automatically expands as data is inserted into it. Again, this is SQL Server’s default mode of operation. This one feature alone—database files that automatically expand as needed—greatly reduces the database administrator’s (DBA’s) workload by alleviating the need to monitor databases constantly to ensure that they don’t run out of space. A full transaction log prevents additional changes to the database, and a full data segment prevents additional data from being inserted.
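If you’d rather control sizing yourself, CREATE DATABASE also accepts explicit file specifications. Here’s a sketch; the logical names, file paths, and sizes below are illustrative placeholders, not recommendations:

```sql
-- A sketch of CREATE DATABASE with explicit data and log files.
-- Paths and sizes are made-up examples; adjust them for your server.
CREATE DATABASE GG_TS2
ON
(NAME = GG_TS2_Data,
 FILENAME = 'C:\MSSQL\DATA\GG_TS2.MDF',
 SIZE = 10MB,
 FILEGROWTH = 5MB)
LOG ON
(NAME = GG_TS2_Log,
 FILENAME = 'C:\MSSQL\DATA\GG_TS2_Log.LDF',
 SIZE = 5MB,
 FILEGROWTH = 5MB)
```

The FILEGROWTH setting preserves the automatic-expansion behavior described above while letting you choose the starting size and growth increment yourself.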

Creating Tables

Once the database is created, you’re ready to begin adding objects to it. Let’s begin by creating some tables using SQL’s CREATE TABLE statement. To ensure that those tables are created in the new database, be sure to change the current database focus to GG_TS before issuing any of these commands. You can do this two ways: You can execute a USE command—USE GG_TS—in the query edit window prior to executing any other commands, or (assuming you’re using Query Analyzer) you can select the new database from the DB: combo-box on the edit window’s toolbar (select from this list if your new database is not visible at first). The DB: combo-box reflects the currently selected database, so be sure it points to GG_TS before proceeding.

Execute the following command to create the customers table:

USE GG_TS     -- Change the current database context to GG_TS
GO
CREATE TABLE customers
(
CustomerNumber int      NOT NULL,
LastName       char(30) NOT NULL,
FirstName      char(30) NOT NULL,
StreetAddress  char(30) NOT NULL,
City           char(20) NOT NULL,
State          char(2)  NOT NULL,
Zip            char(10) NOT NULL
)

Once the customers table is built, create the orders table using similar syntax:

CREATE TABLE orders
(
OrderNumber    int          NOT NULL,
OrderDate      datetime     NOT NULL,
CustomerNumber int          NOT NULL,
ItemNumber     int          NOT NULL,
Amount         numeric(9,2) NOT NULL
)

Most SQL concepts can be demonstrated using three or fewer tables, so we’ll create a third table. Create the items table using this command:

CREATE TABLE items
(
ItemNumber  int          NOT NULL,
Description char(30)     NOT NULL,
Price       numeric(9,2) NOT NULL
)

These commands are fairly self-explanatory. The only element that might look a little strange if you’re new to SQL Server is the NOT NULL specification. The SQL NULL keyword is a special syntax token that’s used to represent unknown or nonexistent values. It is not the same as zero for integers or blanks for character string columns. NULL indicates that a value is not known or completely missing from the column—that it’s not there at all. The difference between NULL and zero is the difference between having a zero account balance and not having an account at all. (See Chapter 3, “Missing Values,” for more information on NULLs.) The NULL/NOT NULL specification is used to control whether a column can store SQL’s NULL token. This is formally referred to as column nullability. It dictates whether the column can be truly empty. So, you could read NULL/NOT NULL as NOT REQUIRED/REQUIRED, respectively. If a field can’t contain NULL, it can’t be truly empty and is therefore required to have some other value.

Note that you don’t have to specify column nullability when you create a table—SQL Server will supply a default setting if it’s omitted. The rules governing default column nullability go like this:

  • If you explicitly specify either NULL or NOT NULL, it will be used (if valid—see below).
  • If a column is based on a user-defined data type, that data type’s nullability specification is used.
  • If a column has only one nullability option, that option is used. Timestamp columns always require values, and bit columns can require them as well, depending on the server compatibility setting (specified via the sp_dbcmptlevel system stored procedure).
  • If the session setting ANSI_NULL_DFLT_ON is set to true (it defaults to the setting specified in the database), column nullability defaults to true. ANSI SQL specifies that columns are nullable by default. Connecting to SQL Server via ODBC or OLEDB (which is the normal way applications connect) sets ANSI_NULL_DFLT_ON to true by default, though this can be changed in ODBC data sources or by the calling application.
  • If the database setting ANSI null default is set to true (it defaults to false), column nullability is set to true.
  • If none of these conditions specifies an ANSI NULL setting, column nullability defaults to false so that columns don’t allow NULL values.
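To see the distinction in practice, here’s a quick sketch. The scratch table below is hypothetical (it isn’t one of the sample tables), but it shows explicit nullability on both columns and what happens when you try to insert a NULL into each:

```sql
-- Hypothetical scratch table illustrating column nullability
CREATE TABLE nulltest
(
RequiredCol int NOT NULL,
OptionalCol int NULL
)

INSERT INTO nulltest VALUES(1, NULL)   -- succeeds; OptionalCol allows NULLs
INSERT INTO nulltest VALUES(NULL, 1)   -- fails; RequiredCol rejects NULLs
```

Specifying nullability explicitly, as this sketch does, sidesteps the default rules above entirely and is generally the safer practice.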

Inserting Data

Use the Transact-SQL INSERT statement to add data to a table, one row at a time. Let’s explore this by adding some test data to the customers table. Enter the following SQL commands to add three rows to customers:

INSERT INTO customers
VALUES(1,'Doe','John','123 Joshua Tree','Plano','TX','75025')
INSERT INTO customers
VALUES(2,'Doe','Jane','123 Joshua Tree','Plano','TX','75025')
INSERT INTO customers
VALUES(3,'Citizen','John','57 Riverside','Reo','CA','90120')

Now, add four rows to the orders table using the same syntax:

INSERT INTO orders
VALUES(101,'10/18/90',1,1001,123.45)
INSERT INTO orders
VALUES(102,'02/27/92',2,1002,678.90)
INSERT INTO orders
VALUES(103,'05/20/95',3,1003,86753.09)
INSERT INTO orders
VALUES(104,'11/21/97',1,1002,678.90)

Finally, insert three rows into the items table like so:

INSERT INTO items
VALUES(1001,'WIDGET A',123.45)
INSERT INTO items
VALUES(1002,'WIDGET B',678.90)
INSERT INTO items
VALUES(1003,'WIDGET C',86753.09)

Notice that none of these INSERTs specifies a list of fields, only a list of values. The INSERT command defaults to inserting a value for all columns in order, though you could have specified a column list for each INSERT using syntax like this:

INSERT INTO items (ItemNumber, Price)
VALUES(1001,123.45)

Also note that it’s unnecessary to follow the table’s column order in a column list; however, the order of values you supply must match the order of the column list. Here’s an example:

INSERT INTO items (Price, ItemNumber)
VALUES(123.45, 1001)

One final note: The INTO keyword is optional in Transact-SQL. This deviates from the ANSI SQL standard and from most other SQL dialects. The syntax below is equivalent to the previous query:

INSERT items (Price, ItemNumber)
VALUES(123.45, 1001)

Updating Data

Most people eventually want to change the data they’ve loaded into a database. The SQL UPDATE command is the means by which this happens. Here’s an example:

UPDATE customers
SET Zip='86753-0900'
WHERE City='Reo'

Depending on the data, the WHERE clause in this query might limit the UPDATE to a single row or to many rows. You can update all the rows in a table by omitting the WHERE clause:

UPDATE customers
SET State='CA'

You can also update a column using columns in the same table, including the column itself, like so:

UPDATE orders
SET Amount=Amount+(Amount*.07)

Transact-SQL provides a nice extension to the SQL UPDATE command that allows you to update the values in one table with those from another. Here’s an example:

UPDATE o
SET Amount=Price
FROM orders o JOIN items i ON (o.ItemNumber=i.ItemNumber)

Deleting Data

The SQL DELETE command is used to remove data from tables. To delete all the rows in a table at once, use this syntax:

DELETE FROM customers

Similarly to INSERT, the FROM keyword is optional. Like UPDATE, DELETE can optionally include a WHERE clause to qualify the rows it removes. Here’s an example:

DELETE FROM customers
WHERE LastName<>'Doe'

SQL Server provides a quicker, more brute-force command for quickly emptying a table. It’s similar to the dBASE ZAP command and looks like this:

TRUNCATE TABLE customers
TRUNCATE TABLE empties a table without logging row deletions in the transaction log. It can’t be used with tables referenced by FOREIGN KEY constraints, and it invalidates the transaction log for the entire database. Once the transaction log has been invalidated, it can’t be backed up until the next full database backup. TRUNCATE TABLE also circumvents the triggers defined on a table, so DELETE triggers don’t fire, even though, technically speaking, rows are being deleted from the table. (See Chapter 4, “DDL Insights,” for more information.)
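If you want to see the two commands side by side against the sample data, here’s a sketch (note that it empties the items table, so reinsert its three rows afterward if you want to keep following along):

```sql
-- DELETE logs each removed row and fires any DELETE triggers
DELETE FROM items WHERE Price < 500

-- TRUNCATE TABLE empties the table without per-row logging or triggers
TRUNCATE TABLE items
```

For everyday work, prefer DELETE; reserve TRUNCATE TABLE for cases where you understand and accept its effect on the transaction log and triggers.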

Querying Data

The SELECT command is used to query tables and views for data. You specify what you want via a SELECT statement, and the server “serves” it to you via a result set—a collection of rows containing the data you requested. SELECT is the Swiss Army knife of basic SQL. It can join tables, retrieve data you request, assign local variables, and even create other tables. It’s a fair guess that you’ll use the SELECT statement more than any other single command in Transact-SQL.

We’ll begin exploring SELECT by listing the contents of the tables you just built. Execute

SELECT * FROM tablename

in Query Analyzer, replacing tablename with the name of each of the three tables. You should find that the customers and items tables have three rows each, while orders has four.

SELECT * FROM customers

(Results abridged)

CustomerNumber LastName FirstName StreetAddress
-------------- -------- --------- ---------------
1              Doe      John      123 Joshua Tree
2              Doe      Jane      123 Joshua Tree
3              Citizen  John      57 Riverside

SELECT * FROM orders

OrderNumber OrderDate               CustomerNumber ItemNumber Amount
----------- ----------------------- -------------- ---------- --------
101         1990-10-18 00:00:00.000 1              1001       123.45
102         1992-02-27 00:00:00.000 2              1002       678.90
103         1995-05-20 00:00:00.000 3              1003       86753.09
104         1997-11-21 00:00:00.000 1              1002       678.90

SELECT * FROM items

ItemNumber Description Price
---------- ----------- --------
1001       WIDGET A    123.45
1002       WIDGET B    678.90
1003       WIDGET C    86753.09

Column Lists

SELECT * returns all the columns in a table. To return a subset of a table’s columns, use a comma-delimited field list, like so:

SELECT CustomerNumber, LastName, State FROM customers

CustomerNumber LastName State
-------------- -------- -----
1              Doe      TX
2              Doe      TX
3              Citizen  CA

A SELECT’s column list can include column references, local variables, absolute values, functions, and expressions involving any combinations of these elements.

SELECTing Variables and Expressions

Unlike most SQL dialects, the FROM clause is optional in Transact-SQL when not querying database objects. You can issue SELECT statements that return variables (automatic or local), functions, constants, and computations without using a FROM clause. For example,

SELECT GETDATE()
returns the system date on the computer hosting SQL Server, and

SELECT CAST(10+1 AS CHAR(2))+'/'+CAST(POWER(2,5)-5 AS CHAR(2))+'/19'+CAST(30+31 AS CHAR(2))

returns a simple string. Unlike Oracle and many other DBMSs, SQL Server doesn’t force the inclusion of a FROM clause if it makes no sense to do so. Here’s an example that returns an automatic variable:

SELECT @@VERSION
And here’s one that returns the current user name:

SELECT SUSER_SNAME()
@@VERSION is an automatic variable that’s predefined by SQL Server and read-only. The SQL Server Books Online now refers to these variables as functions, but they aren’t functions in the true sense of the word—they’re predefined constants or automatic variables (e.g., they can be used as parameters to stored procedures, but true functions cannot). I like variable better than constant because the values they return can change throughout a session—they aren’t really constant, they’re just read-only as far as the user is concerned. You’ll see the term automatic variable used throughout this book.

Functions
Functions can be used to modify a column value in transit. Transact-SQL provides a bevy of functions that can be roughly divided into six major groups: string functions, numeric functions, date functions, aggregate functions, system functions, and meta-data functions. Here’s a Transact-SQL function in action:

SELECT UPPER(LastName), FirstName
FROM customers

               FirstName
-------------- ---------
DOE            John
DOE            Jane
CITIZEN        John

Here, the UPPER() function is used to uppercase the LastName column as it’s returned in the result set. This affects only the result set—the underlying data is unchanged.
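String functions can also be combined within a single expression. Here’s a sketch (the abbreviated-name format is invented for illustration; RTRIM() is used because char columns are blank-padded):

```sql
-- Build "J. Doe"-style names from the sample data
SELECT SUBSTRING(FirstName,1,1)+'. '+RTRIM(LastName) AS ShortName
FROM customers
```

As with UPPER(), this affects only the result set; the stored data is untouched.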

Converting Data Types

Converting data between types is equally simple. You can use either the CAST() or CONVERT() function to convert one data type to another, but CAST() is the SQL-92-compliant method. Here’s a SELECT that converts the Amount column in the orders table to a character string:

SELECT CAST(Amount AS varchar) FROM orders

--------
123.45
678.90
86753.09
678.90

Here’s an example that illustrates how to convert a datetime value to a character string using a specific format:

SELECT CONVERT(char(8), GETDATE(), 112)

--------
19690720

This example highlights one situation in which CONVERT() offers superior functionality to CAST(). CONVERT() supports a style parameter (the third argument above) that specifies the exact format to use when converting a datetime value to a character string. You can find the table of supported styles in the Books Online, but styles 102 and 112 are probably the most common.
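For instance, the two styles just mentioned render the same date differently (the actual value returned depends, of course, on when you run the query):

```sql
SELECT CONVERT(char(10), GETDATE(), 102)  -- ANSI style: yyyy.mm.dd
SELECT CONVERT(char(8),  GETDATE(), 112)  -- ISO style: yyyymmdd
```

Style 112 is particularly handy because the resulting strings sort chronologically.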

The CASE Function
In the examples throughout this book, you’ll find copious use of the CASE function. CASE has two basic forms. In the simpler form, you specify result values for each member of a series of expressions that are compared to a determinant or key expression, like so:

SELECT CASE sex
       WHEN 0 THEN 'Unknown'
       WHEN 1 THEN 'Male'
       WHEN 2 THEN 'Female'
       ELSE 'Not applicable'
       END

In the more complex form, known as a “searched” CASE, you specify individual result values for multiple, possibly distinct, logical expressions, like this:

SELECT CASE
       WHEN DATEDIFF(dd,RentDueDate,GETDATE())>15 THEN Deposit
       WHEN DATEDIFF(dd,RentDueDate,GETDATE())>5 THEN DailyPenalty*DATEDIFF(dd,RentDueDate,GETDATE())
       ELSE 0
       END
A searched CASE is similar to an embedded IF…ELSE, with each WHEN performing the function of a new ELSE clause.

Personally, I’ve never liked the CASE syntax. I like the idea of a CASE function, but I find the syntax unwieldy. It behaves like a function in that it can be nested within other expressions, but syntactically, it looks more like a flow-control statement. In some languages, “CASE” is a flow-control keyword that’s analogous to the C/C++ switch statement. In Transact-SQL, CASE is used similarly to an inline or “immediate” IF—it returns a value based on if-then-else logic. Frankly, I think it would make a lot more sense for the syntax to read something like this:

CASE(sex, 0, 'Unknown', 1, 'Male', 2, 'Female', 'Unknown')


CASE(DATEDIFF(dd,RentDueDate,GETDATE())>15, Deposit,
     DATEDIFF(dd,RentDueDate,GETDATE())>5, DailyPenalty*DATEDIFF(dd,RentDueDate,GETDATE()),
     0)

This is the way that the Oracle DECODE() function works. It’s more compact and much easier to look at than the cumbersome ANSI CASE syntax.

Aggregate Columns

Aggregate columns consist of special functions that perform some calculation on a set of data. Examples of aggregates include the COUNT(), SUM(), AVG(), MIN(), STDEV(), VAR(), and MAX() functions. They’re best understood by example. Here’s a command that returns the total number of customer records on file:

SELECT COUNT(*) FROM customers

Here’s one that returns the dollar amount of the largest order on file:

SELECT MAX(Amount) FROM orders

And here’s one that returns the total dollar amount of all orders:

SELECT SUM(Amount) FROM orders

Aggregate functions are often used in tandem with SELECT’s GROUP BY clause (covered below) to produce grouped or partitioned aggregates. They can be employed in other uses as well (e.g., to “hide” normally invalid syntax), as the chapters on statistical computations illustrate.
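As a quick preview of grouped aggregates, here’s a sketch that totals the sample orders for each customer rather than for the table as a whole:

```sql
-- One row per customer, with an order count and dollar total
SELECT CustomerNumber, COUNT(*) AS NumOrders, SUM(Amount) AS TotalAmount
FROM orders
GROUP BY CustomerNumber
```

Each nonaggregate column in the SELECT list (here, CustomerNumber) must appear in the GROUP BY clause.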

Filtering Data

You use the SQL WHERE clause to qualify the data a SELECT statement returns. It can also be used to limit the rows affected by an UPDATE or DELETE statement. Here are some queries that use WHERE to filter the data they return:

SELECT UPPER(LastName), FirstName
FROM customers
WHERE State='TX'

    FirstName
--- ---------
DOE John
DOE Jane

The following code restricts the customers returned to those whose address contains the word “Joshua.”

SELECT LastName, FirstName, StreetAddress FROM customers
WHERE StreetAddress LIKE '%Joshua%'

LastName FirstName StreetAddress
-------- --------- ---------------
Doe      John      123 Joshua Tree
Doe      Jane      123 Joshua Tree

Note the use of “%” as a wildcard. The SQL wildcard % (percent sign) matches zero or more instances of any character, while _ (underscore) matches exactly one.
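Here’s a sketch using the single-character wildcard (the pattern is invented for illustration; note the trailing %, needed because char columns are blank-padded):

```sql
-- '_' matches exactly one character, so 'J_hn%' matches 'John' but not 'Jane'
SELECT LastName, FirstName
FROM customers
WHERE FirstName LIKE 'J_hn%'
```

Against the sample data, this returns the two John rows and skips Jane.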

Here’s a query that returns the orders exceeding $500:

SELECT OrderNumber, OrderDate, Amount
FROM orders
WHERE Amount > 500

OrderNumber OrderDate               Amount
----------- ----------------------- --------
102         1992-02-27 00:00:00.000 678.90
103         1995-05-20 00:00:00.000 86753.09
104         1997-11-21 00:00:00.000 678.90

The following example uses the BETWEEN operator to return orders occurring between October 1990 and May 1995, inclusive. I’ve included the time with the second of the two dates because, without it, the time would default to midnight (SQL Server datetime columns always store both the date and time; an omitted time defaults to midnight), making the query noninclusive. Without the time portion, the query would return only orders placed up through the first millisecond of May 31.

SELECT OrderNumber, OrderDate, Amount FROM orders
WHERE OrderDate BETWEEN '10/01/90' AND '05/31/95 23:59:59.999'

OrderNumber OrderDate               Amount
----------- ----------------------- --------
101         1990-10-18 00:00:00.000 123.45
102         1992-02-27 00:00:00.000 678.90
103         1995-05-20 00:00:00.000 86753.09

Joins
A query that can access all the data it needs in a single table is a pretty rare one. John Donne said “no man is an island,” and, in relational databases, no table is, either. Usually, a query will have to go to two or more tables to find all the information it requires. This is the way of things with relational databases. Data is intentionally spread out to keep it as modular as possible. There are lots of good reasons for this modularization (formally known as normalization) that I won’t go into here, but one of its downsides is that what might be a single conceptual entity (an invoice, for example) is often split into multiple physical entities when constructed in a relational database.

Dealing with this fragmentation is where joins come in. A join consolidates the data in two tables into a single result set. The tables aren’t actually merged; they just appear to be in the rows returned by the query. Multiple joins can consolidate multiple tables—it’s quite common to see joins that are multiple levels deep involving scads of tables.

A join between two tables is established by linking a column or columns in one table with those in another (CROSS JOINs are an exception, but more on them later). The expression used to join the two tables constitutes the join condition or join criterion. When the join is successful, data in the second table is combined with the first to form a composite result set—a set of rows containing data from both tables. In short, the two tables have a baby, albeit an evanescent one.

There are two basic types of joins, inner joins and outer joins. The key difference between them is that outer joins include rows in the result set even when the join condition isn’t met, while an inner join doesn’t. How is this? What data ends up in the result set when the join condition fails? When the join criteria in an outer join aren’t met, columns in the first table are returned normally, but columns from the second table are returned with no value—as NULLs. This is handy for finding missing values and broken links between tables.
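That “broken links” use is worth a quick sketch. Assuming a customer row existed with no matching orders, a query like this would list it by keeping only the rows where the outer join failed:

```sql
-- Find customers with no orders: the outer join returns NULLs
-- for the orders columns when no match exists
SELECT customers.CustomerNumber, customers.LastName
FROM customers LEFT OUTER JOIN orders
ON (customers.CustomerNumber=orders.CustomerNumber)
WHERE orders.OrderNumber IS NULL
```

Against the sample data as loaded, this returns no rows, since every customer has at least one order.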

There are two families of syntax for constructing joins—legacy and ANSI/ISO SQL-92 compliant. The legacy syntax dates back to SQL Server’s days as a joint venture between Sybase and Microsoft. It’s more succinct than the ANSI syntax and looks like this:

SELECT customers.CustomerNumber, orders.Amount
FROM customers, orders
WHERE customers.CustomerNumber=orders.CustomerNumber

CustomerNumber Amount
-------------- --------
1              123.45
2              678.90
3              86753.09
1              678.90

Note the use of the WHERE clause to join the customers and orders tables together. This is an inner join. If an order doesn’t exist for a given customer, that customer is omitted completely from the list. Here’s the ANSI version of the same query:

SELECT customers.CustomerNumber, orders.Amount
FROM customers JOIN orders
ON (customers.CustomerNumber=orders.CustomerNumber)

This one’s a bit loquacious, but the end result is the same: customers and orders are merged using their respective CustomerNumber columns.

As I mentioned earlier, it’s common for queries to construct multilevel joins. Here’s an example of a multilevel join that uses the legacy syntax:

SELECT customers.CustomerNumber, orders.Amount, items.Description
FROM customers, orders, items
WHERE customers.CustomerNumber=orders.CustomerNumber
AND orders.ItemNumber=items.ItemNumber

CustomerNumber Amount   Description
-------------- -------- -----------
1              123.45   WIDGET A
2              678.90   WIDGET B
3              86753.09 WIDGET C
1              678.90   WIDGET B

This query joins the composite of the customers table and the orders table with the items table. Note that the exact ordering of the WHERE clause is unimportant. In order to allow servers to fully optimize queries, SQL requires that the ordering of the predicates in a WHERE clause must not affect the result set. They must be associative—the query must return the same result regardless of the order in which they’re processed.

As with the two-table join, the ANSI syntax for multitable inner joins is similar to the legacy syntax. Here’s the ANSI syntax for the multitable join above:

SELECT customers.CustomerNumber, orders.Amount, items.Description
FROM customers JOIN orders
ON (customers.CustomerNumber=orders.CustomerNumber)
JOIN items ON (orders.ItemNumber=items.ItemNumber)

Again, it’s a bit wordier, but it performs the same function.

Outer Joins

Thus far, there hasn’t been a stark contrast between the ANSI and legacy join syntaxes. Though not syntactically identical, they seem to be functionally equivalent.

This all changes with outer joins. The ANSI outer join syntax addresses ambiguities inherent in using the WHERE clause—whose terms are by definition associative—to perform table joins. Here’s an example of the legacy syntax that contains such ambiguities:

-- Bad SQL - Don't run
SELECT customers.CustomerNumber, orders.Amount, items.Description
FROM customers, orders, items
WHERE customers.CustomerNumber*=orders.CustomerNumber
AND orders.ItemNumber*=items.ItemNumber

Don’t bother trying to run this—SQL Server won’t allow it. Why? Because WHERE clause terms are required to be associative, but these aren’t. If customers and orders are joined first, those rows where a customer exists but has no orders will be impossible to join with the items table since their ItemNumber column will be NULL. On the other hand, if orders and items are joined first, the result set will include ITEM records it likely would have otherwise missed. So the order of the terms in the WHERE clause is significant when constructing multilevel joins using the legacy syntax.

It’s precisely because of this ambiguity—whether the ordering of WHERE clause predicates is significant—that the SQL-92 standard moved join construction to the FROM clause. Here’s the above query rewritten using valid ANSI join syntax:

SELECT customers.CustomerNumber, orders.Amount, items.Description
FROM customers LEFT OUTER JOIN orders ON
(customers.CustomerNumber=orders.CustomerNumber)
LEFT OUTER JOIN items ON (orders.ItemNumber=items.ItemNumber)

CustomerNumber Amount   Description
-------------- -------- -----------
1              123.45   WIDGET A
1              678.90   WIDGET B
2              678.90   WIDGET B
3              86753.09 WIDGET C

Here, the ambiguities are gone, and it’s clear that the query is first supposed to join the customers and orders tables, then join the result with the items table. (Note that the OUTER keyword is optional.)

To understand how this shortcoming in the legacy syntax can affect query results, consider the following query. We’ll set it up initially so that the outer join works as expected:

SELECT customers.CustomerNumber, orders.Amount
FROM customers, orders
WHERE customers.CustomerNumber*=orders.CustomerNumber
AND orders.Amount>600

CustomerNumber Amount
-------------- --------
1              678.90
2              678.90
3              86753.09

Since every row in customers finds a match in orders, the problem isn’t obvious. Now let’s change the query so that there are a few mismatches between the tables, like so:

SELECT customers.CustomerNumber+2, orders.Amount
FROM customers, orders
WHERE customers.CustomerNumber+2*=orders.CustomerNumber
AND orders.Amount>600

This version simply adds 2 to CustomerNumber to ensure that at least a few of the joins will fail and the columns in orders will be returned as NULLs. Here’s the result set:

CustomerNumber Amount
-------------- --------
3              86753.09
4              NULL
5              NULL

See the problem? Those last two rows shouldn’t be there. Amount is NULL in those rows (because there are no orders for customers 4 and 5), and whether it exceeds $600 is unknown. The query is supposed to return only those rows whose Amount column is known to exceed $600, but that’s not the case. Here’s the ANSI version of the same query:

SELECT customers.CustomerNumber+2, orders.Amount
FROM customers LEFT OUTER JOIN orders ON (customers.CustomerNumber+2=orders.CustomerNumber)
WHERE orders.Amount>600

CustomerNumber Amount
-------------- --------
3              86753.09

The SQL-92 syntax correctly omits the rows with a NULL Amount. The reason the legacy query fails here is that the predicates in its WHERE clause are evaluated together. When Amount is checked against the >600 predicate, it has not yet been returned as NULL, so it’s erroneously included in the result set. By the time it’s set to NULL, it’s already in the result set, effectively negating the >600 predicate.
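Incidentally, the ANSI syntax also gives you explicit control over where a predicate is applied. A condition placed in the ON clause is evaluated as part of the join, before unmatched rows are preserved; a condition in the WHERE clause filters the joined result afterward. Here's a sketch, using the chapter's sample tables, that moves the Amount test into the ON clause:

```sql
SELECT customers.CustomerNumber+2, orders.Amount
FROM customers LEFT OUTER JOIN orders ON
(customers.CustomerNumber+2=orders.CustomerNumber AND orders.Amount>600)
```

With the test in the ON clause, customers without qualifying orders still appear in the result set, with Amount returned as NULL; the legacy syntax cannot express this distinction unambiguously.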

Though the inner join syntax you choose is largely a matter of preference, you should still use the SQL-92 syntax whenever possible. It’s hard enough keeping up with a single way of joining tables, let alone two different ways. And, as we’ve seen, there are some real problems with the legacy outer join syntax. Moreover, Microsoft strongly recommends the use of the ANSI syntax and has publicly stated that the legacy outer join syntax will be dropped in a future release of the product. Jumping on the ANSI/ISO bandwagon also makes sense from another perspective: interoperability. Given the way in which the DBMS world—like the real world—is shrinking, it’s not unusual for an application to communicate with or rely upon more than one vendor’s DBMS. Heterogeneous joins, passthrough queries, and vendor-to-vendor replication are now commonplace. Knowing this, it makes sense to abandon proprietary syntax elements in favor of those that play well with others.

Other Types of Joins

Thus far, we’ve explored only left joins—both inner and outer. There are a few others that are worth mentioning as well. Transact-SQL also supports RIGHT OUTER JOINs, CROSS JOINs, and FULL OUTER JOINs.

A RIGHT OUTER JOIN isn’t really that different from a LEFT OUTER JOIN. In fact, it’s really just a LEFT OUTER JOIN with the tables reversed. It’s very easy to restate a LEFT OUTER JOIN as a RIGHT OUTER JOIN. Here’s the earlier LEFT OUTER JOIN query restated:

SELECT customers.CustomerNumber+2, orders.Amount
FROM orders RIGHT OUTER JOIN customers ON (customers.CustomerNumber+2=orders.CustomerNumber)

       Amount
------ --------
3      86753.09
4      NULL
5      NULL

A RIGHT JOIN returns the columns in the first table as NULLs when the join condition fails. Since you decide which table is the first table and which one’s the second, whether you use a LEFT JOIN or a RIGHT JOIN is largely a matter of preference.

A CROSS JOIN, by contrast, is an intentional Cartesian product. The size of a Cartesian product is the number of rows in one table multiplied by those in the other. So for two tables with three rows each, their CROSS JOIN or Cartesian product would consist of nine rows. By definition, CROSS JOINs don’t need or support the use of the ON clause that other joins require. Here’s a CROSS JOIN of the customers and orders tables:

SELECT customers.CustomerNumber, orders.Amount
FROM orders CROSS JOIN customers

CustomerNumber Amount
-------------- --------
1              123.45
1              678.90
1              86753.09
1              678.90
2              123.45
2              678.90
2              86753.09
2              678.90
3              123.45
3              678.90
3              86753.09
3              678.90

(12 row(s) affected)

A FULL OUTER JOIN returns rows from both tables regardless of whether the join condition succeeds. When a join column in the first table fails to find a match in the second, the values from the second table are returned as NULL, just as they are with a LEFT OUTER JOIN. When the join column in the second table fails to find a matching value in the first table, columns in the first table are returned as NULL, as they are in a RIGHT OUTER JOIN. You can think of a FULL OUTER JOIN as the combination of a LEFT JOIN and a RIGHT JOIN. Here’s the earlier LEFT OUTER JOIN restated as a FULL OUTER JOIN:

SELECT customers.CustomerNumber+2, orders.Amount
FROM customers FULL OUTER JOIN orders ON (customers.CustomerNumber+2=orders.CustomerNumber)

       Amount
------ --------
3      86753.09
4      NULL
5      NULL
NULL   123.45
NULL   678.90
NULL   678.90


Subqueries
A SELECT statement that’s enclosed in parentheses and embedded within another query (usually in its WHERE clause) is called a subquery. A subquery is normally used to return a list of items that is then compared against a column in the main query. Here’s an example:

SELECT * FROM customers
WHERE CustomerNumber IN (SELECT CustomerNumber FROM orders)

Of course, you could accomplish the same thing with an inner join. In fact, the SQL Server optimizer turns this query into an inner join internally. However, you get the idea—a subquery returns an item or set of items that you may then use to filter a query or return a column value.
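For illustration, here are two equivalent formulations of the query above; a sketch, assuming the same customers and orders sample tables. The DISTINCT in the join version guards against duplicate rows when a customer has more than one order:

```sql
-- Restated as an inner join
SELECT DISTINCT customers.*
FROM customers JOIN orders
ON customers.CustomerNumber=orders.CustomerNumber

-- Restated as a correlated subquery using EXISTS
SELECT * FROM customers
WHERE EXISTS (SELECT * FROM orders
              WHERE orders.CustomerNumber=customers.CustomerNumber)
```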

Grouping Data

Since SQL is a set-oriented query language, statements that group or summarize data are its bread and butter. In conjunction with aggregate functions, they are the means by which the real work of SQL queries is performed. Developers familiar with DBMS products that lean more toward single-record handling find this peculiar because they are accustomed to working with data one row at a time. Generating summary information by looping through a table is a common technique in older database products—but not in SQL Server. A single SQL statement can perform tasks that used to require an entire COBOL program to complete. This magic is performed using SELECT’s GROUP BY clause and Transact-SQL aggregate functions. Here’s an example:

SELECT customers.CustomerNumber, SUM(orders.Amount) AS TotalOrders
FROM customers JOIN orders
ON customers.CustomerNumber=orders.CustomerNumber
GROUP BY customers.CustomerNumber

This query returns a list of all customers and the total amount of each customer’s orders.

How do you know which fields to include in the GROUP BY clause? You must include all the items in the SELECT statement’s column list that are not aggregate functions or absolute values. Take the following SELECT statement:

-- Bad SQL - don't do this
SELECT customers.CustomerNumber, customers.LastName,
       SUM(orders.Amount) AS TotalOrders
FROM customers JOIN orders
ON customers.CustomerNumber=orders.CustomerNumber
GROUP BY customers.CustomerNumber

This query won’t execute because it’s missing a column in the GROUP BY clause. Instead, it should read:

GROUP BY customers.CustomerNumber, customers.LastName

Note that the addition of the LastName column doesn’t really affect the results since CustomerNumber is a unique key. That is, including LastName as a GROUP BY column won’t cause any additional grouping levels to be produced since there is only one LastName for each CustomerNumber.
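A single GROUP BY query can also compute several aggregates at once. This sketch, against the same sample tables, counts, totals, and averages each customer's orders in one pass:

```sql
SELECT customers.CustomerNumber, customers.LastName,
       COUNT(*)           AS NumberOfOrders,
       SUM(orders.Amount) AS TotalOrders,
       AVG(orders.Amount) AS AverageOrder
FROM customers JOIN orders
ON customers.CustomerNumber=orders.CustomerNumber
GROUP BY customers.CustomerNumber, customers.LastName
```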


The HAVING Clause
The HAVING clause is used to limit the rows returned by a SELECT with GROUP BY. Its relationship to GROUP BY is similar to the relationship between the WHERE clause and the SELECT itself. Like the WHERE clause, it restricts the rows returned by a SELECT statement. Unlike WHERE, it operates on the rows in the result set rather than the rows in the query’s tables. Here’s the previous query modified to include a HAVING clause:

SELECT customers.CustomerNumber, customers.LastName,
       SUM(orders.Amount) AS TotalOrders
FROM customers JOIN orders
ON customers.CustomerNumber=orders.CustomerNumber
GROUP BY customers.CustomerNumber, customers.LastName
HAVING SUM(orders.Amount) > 700

CustomerNumber LastName TotalOrders
-------------- -------- -----------
3              Citizen  86753.09
1              Doe      802.35

There is often a better way of qualifying a query than by using a HAVING clause. In general, HAVING is less efficient than WHERE because it qualifies the result set after it’s been organized into groups; WHERE does so beforehand. Here’s an example that improperly uses the HAVING clause:

-- Bad SQL - don't do this
SELECT customers.LastName, COUNT(*) AS NumberWithName
FROM customers
GROUP BY customers.LastName
HAVING customers.LastName<>'Citizen'

Properly written, this query’s filter criteria should be in its WHERE clause, like so:

SELECT customers.LastName, COUNT(*) AS NumberWithName
FROM customers
WHERE customers.LastName<>'Citizen'
GROUP BY customers.LastName

In fact, SQL Server recognizes this type of HAVING misuse and translates HAVING into WHERE during query execution. Regardless of whether SQL Server catches errors like these, it’s always better to write optimal code in the first place.

Ordering Data

The ORDER BY clause is used to order the rows returned by a query. It follows the WHERE and GROUP BY clauses (if they exist) and sorts the result set just prior to returning it. Here’s an example:

SELECT LastName, State
FROM customers
ORDER BY State

Here’s another example:

SELECT FirstName, LastName
FROM customers
ORDER BY LastName DESC

Note the use of the DESC keyword to sort the rows in descending order. If not directed otherwise, ORDER BY always sorts in ascending order.
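You can also sort on more than one column, and each column in the ORDER BY list may carry its own ASC or DESC keyword. A sketch against the sample customers table:

```sql
SELECT LastName, FirstName, State
FROM customers
ORDER BY State ASC, LastName DESC
```

Rows are ordered by State first; ties are then broken by LastName, in descending order.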

Column Aliases

You might have noticed that some of the earlier queries in this chapter use logical column names for aggregate functions such as COUNT() and SUM(). Labels such as these are known as column aliases and make the query and its result set more readable. As with joins, Transact-SQL provides two separate syntaxes for establishing column aliases: legacy or classical and ANSI standard. In the classical syntax, the column alias immediately precedes the column and the two are separated with an equal sign, like so:

SELECT TotalOrders=SUM(Amount)
FROM orders
ANSI syntax, by contrast, places a column alias immediately to the right of its corresponding column and optionally separates the two with the AS keyword, like so:

SELECT SUM(Amount) AS TotalOrders
FROM orders


Unlike joins, the column alias syntax you choose won’t affect query result sets. This is largely a matter of preference, though it’s always advisable to use the ANSI syntax when you can, if for no other reason than compatibility with other products.

You can use column aliases for any item in a result set, not just aggregate functions. For example, the following example substitutes the column alias LName for the LastName column in the result set:

SELECT customers.LastName AS LName, COUNT(*) AS NumberWithName
FROM customers
GROUP BY customers.LastName

Note, however, that you cannot use column aliases in other parts of the query except in the ORDER BY clause. In the WHERE, GROUP BY, and HAVING clauses, you must use the actual column name or value. In addition to supporting column aliases, ORDER BY supports a variation on this in which you can specify a sort column by its ordinal position in the SELECT list, like so:

SELECT FirstName, LastName
FROM customers
ORDER BY 2

This syntax has been deprecated and is less clear than simply using a column name or alias.

Table Aliases

Similar to column aliases, you can use table aliases to avoid having to refer to a table’s full name. You specify table aliases in the FROM clause of queries. Place the alias to the right of the actual table name (optionally separated with the AS keyword), as illustrated here:

SELECT c.LastName, COUNT(*) AS NumberWithName
FROM customers AS c
GROUP BY c.LastName

Notice that the alias can be used in the SELECT list before it is even syntactically defined. This is possible because a query’s references to database objects are resolved before the query is executed.
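Table aliases really pay off in multitable queries. Here's the three-table join from earlier in the chapter restated with aliases, a sketch in which the optional AS keyword is omitted (which is equally legal):

```sql
SELECT c.CustomerNumber, o.Amount, i.Description
FROM customers c
JOIN orders o ON (c.CustomerNumber=o.CustomerNumber)
JOIN items i ON (o.ItemNumber=i.ItemNumber)
```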

Managing Transactions

Transaction management is really outside the scope of introductory T-SQL. Nevertheless, transactions are at the heart of database applications development and a basic understanding of them is key to writing good SQL (see Chapter 14, “Transactions,” for in-depth coverage of transactions).

The term transaction refers to a group of changes to a database. Transactions provide for change atomicity—which means that either all the changes within the group occur or none of them do. SQL Server applications use transactions to ensure data integrity and to avoid leaving the database in an interim state if an operation fails.

The COMMIT command writes a transaction permanently to disk (technically speaking, if nested transactions are present, this is true only of the outermost COMMIT, but that’s an advanced topic). Think of it as a database save command. ROLLBACK, by contrast, throws away the changes a transaction would have made to the database; it functions like a database undo command. Both of these commands affect only the changes made since the last COMMIT; you cannot roll back changes that have already been committed.

Unless the IMPLICIT_TRANSACTIONS session variable has been enabled, you must explicitly start a transaction in order to commit or roll it back. Transactions can be nested, and you can check the current nesting level by querying the @@TRANCOUNT automatic variable, like so:

SELECT @@TRANCOUNT
Here’s an example of some Transact-SQL code that uses transactions to undo changes to the database:

BEGIN TRAN
DELETE customers
GO
ROLLBACK
SELECT * FROM customers

CustomerNumber LastName FirstName StreetAddress   City  State Zip
-------------- -------- --------- --------------- ----- ----- -----
1              Doe      John      123 Joshua Tree Plano TX    75025
2              Doe      Jane      123 Joshua Tree Plano TX    75025
3              Citizen  John      57 Riverside    Reo   CA    90120

As you can see, ROLLBACK reverses the row removals carried out by the DELETE statement.

Be sure to match BEGIN TRAN with either COMMIT or ROLLBACK. Orphaned transactions can cause serious performance and management problems on the server.
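One common way to honor this rule is to test @@ERROR after each modification and finish the transaction accordingly. This is only a sketch of the pattern; the UPDATE shown is hypothetical:

```sql
BEGIN TRAN
UPDATE customers SET Zip='75026' WHERE CustomerNumber=1
IF @@ERROR<>0
   ROLLBACK   -- undo the change if the UPDATE failed
ELSE
   COMMIT     -- otherwise, make it permanent
```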


Summary
This concludes Introductory Transact-SQL. You should now be able to create a database, build tables, and populate those tables with data. You should also be familiar with the basic syntax required for querying tables and for making rudimentary changes to them. Be sure you have a good grasp of basic Transact-SQL before proceeding with the rest of the book.

About the Author

Ken Henderson, a nationally recognized consultant and leading DBMS practitioner, consults on high-end client/server projects for such customers as the U.S. Air Force, the U.S. Navy, H&R Block, Travelers Insurance, J.P. Morgan, the CIA, Owens-Corning, and CNA Insurance. He is the author of five previous books on client/server and DBMS development, a frequent magazine contributor to such publications as Software Development Magazine and DBMS Magazine, and a speaker at technical conferences.

