Testing Levels

Written on 3:09 AM by MURALI KRISHNA

Unit Testing:
Unit testing is testing conducted with knowledge of the inner logic of the program; it is the testing of an individual piece or unit of a program.
Unit testing is a procedure used to validate that a particular module of source code is working properly. The procedure is to write test cases for all functions and methods so that whenever a change causes a regression, it can be quickly identified and fixed.
Benefits:
The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. Unit testing provides a strict, written contract that the piece of code must satisfy. As a result, it affords several benefits.
1) Facilitates Change
2) Simplifies Integration
3) Documentation
4) Separation of Interface from Implementation

Integrated System Testing
Integrated System Testing (IST) is a systematic technique for validating the construction of the overall software structure while, at the same time, conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested modules and test the overall software structure dictated by the design. IST can be done either as top-down integration or bottom-up integration.

System Testing:
System testing is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black-box testing and, as such, should require no knowledge of the inner design of the code.

System testing is done on the entire system against the Functional Requirement Specification (FRS) and/or the System Requirement Specification (SRS). Moreover, system testing is an investigatory testing phase, where the focus is to take an almost destructive attitude and to test not only the design, but also the behavior and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the software/hardware requirements specifications. All the remaining testing types fall under system testing.

User Acceptance Testing:

User Acceptance Testing (UAT) is performed by Users or on behalf of the users to ensure that the Software functions in accordance with the Business Requirement Document. UAT focuses on the following aspects:

• All functional requirements are satisfied
• All performance requirements are achieved
• Other requirements like transportability, compatibility, error recovery etc. are satisfied
• Acceptance criteria specified by the user are met.

Sample Entry and Exit Criteria for User Acceptance Testing
Entry Criteria
♦ Integration testing sign-off has been obtained
♦ Business requirements have been met or renegotiated with the Business Sponsor or representative
♦ UAT test scripts are ready for execution
♦ The testing environment is established
♦ Security requirements have been documented and necessary user access obtained

Exit Criteria
♦ UAT has been completed and approved by the user community in a transition meeting
♦ Change control is managing requested modifications and enhancements
♦ Business sponsor agrees that known defects do not impact a production release—no remaining defects are rated 3, 2, or 1

Bug Life Cycle

Written on 3:05 AM by MURALI KRISHNA

Bug life cycle
The Bug Life Cycle starts when an unintended software behavior (a bug) is found and ends when the bug is fixed and verified. A bug, when found, should be communicated and assigned to a developer who can fix it. Once fixed, the problem area should be re-tested, and confirmation should be made that the fix did not create problems elsewhere. In most cases the life cycle becomes complicated and difficult to track, making it imperative to have a bug/defect tracking system in place.
Following are the different phases of a Bug Life Cycle:
Open: A bug is in the Open state when a tester identifies a problem area.
Accepted: The bug is assigned to a developer for a fix; the developer accepts it if it is valid.
Not Accepted/Won't Fix: If the developer considers the bug low level or does not accept it as a bug, it is moved to the Not Accepted/Won't Fix state.
Such bugs are assigned to the project manager, who decides whether the bug needs a fix. If it does, it is assigned back to the developer; if it does not, it is assigned back to the tester, who closes the bug.
Pending: A bug accepted by the developer may not be fixed immediately. In such cases, it can be put in the Pending state.
Fixed: The developer fixes the bug and resolves it as Fixed.
Closed: The fixed bug is assigned back to the tester, who verifies the fix and puts the bug in the Closed state.
Re-Open: Fixed bugs can be re-opened by testers if the fix produces problems elsewhere.

In its simplest form, the cycle has five states:
1. New: When a bug is first raised, it is in the New state.
2. Open: The developer reviews the bug; it is then in the Open state.
3. Fixed: The developer fixes the bug; the state becomes Fixed.
4. Closed: The tester checks again; if the bug is not raised again, it moves to the Closed state.
5. Reopen: If the bug is raised again, it moves to the Reopen state.

Capability Maturity Model [CMM]

Written on 3:03 AM by MURALI KRISHNA

Capability Maturity Model [CMM]
Developed by the software community in 1986 with leadership from the SEI (Software Engineering Institute).
Has become the de facto standard for assessing and improving processes related to software development.
Has evolved into a process maturity framework that provides guidance for measuring software process maturity and helps establish process improvement programs.
Maturity levels

Initial
Repeatable
Defined
Managed
Optimizing

Level 1 - Initial

At the Initial level the software process is ad hoc and few processes are defined; no key process areas are specified for this level. Each maturity level above Initial decomposes into several key process areas that indicate the areas an organization should focus on to improve its software process.

Level 2 - Repeatable: Key process areas

Requirements management
Software project planning
Software project tracking & oversight
Software subcontract management
Software quality assurance
Software configuration management

Level 3 - Defined: Key process areas

Organization process focus
Organization process definition
Training program
Integrated software management
Software product engineering
Inter group coordination
Peer reviews

Level 4 - Managed: Key process areas

Quantitative Process Management
Software Quality Management


Testing Projects

Written on 9:36 PM by MURALI KRISHNA

Testing Projects

Warehouse Management Systems (Manhattan Associates)
Client: ClientLogic, Sainsbury's
Duration: Aug ’05 – Sep ’05
Role: Test Engineer
Environment: Windows NT/ 2000, Unix, SQL Server 2k, Oracle 10g, Toad
Tools: Silk Test
Project Description:
Warehouse Management, Manhattan Associates' warehouse management system (WMS), is the leading software in the industry. Warehouse Management manages a wide range of processes associated with receiving inventory into the warehouse, tracking it, and shipping it to customers. It also has many functions for managing the work in the warehouse, including wave generation processes.
Warehouse Management can help you achieve total industry compliance — well beyond simple advanced receipt notices and barcode shipping label formats. Warehouse Management manages the day-to-day operations of the warehouse and easily integrates with other business applications, such as order processing systems, which perform high-level order allocation and forecasting and retain shipping histories.
Responsibilities:
Manual Testing
Maintaining Silk Test automation scripts
Database maintenance for Silk test automation
Preparing detailed and comprehensive bug reports

Financial Advisor View – Wealth Management Solutions ( Yodlee )
Client: Wachovia, SEI Investments
Duration: Oct ’04 - Apr ’05
Role: Test Engineer
Environment: Windows 9x/ME/2K/XP, Oracle 10g, Toad, F-Secure SSH client, Resin App Server
Testing Tools: Silk Test, Remedy, and Perforce
Project Description:
Financial Advisor provides financial advisors with online access to their clients' (On Center users') accounts and eliminates the cumbersome process of clients and advisors sharing this important account information.
Admin Tool - This utility will be provided to the person who manages the service for the institution to fulfill tasks necessary to manage the user base, and report on various data points regarding usage activities.
Client View - To facilitate collaboration and communication between an advisor and their clients, Financial Advisor will provide the Advisor with similar views represented to the client in On Center.
Account Export - Recognizes that most Advisors use a suite of software tools to manage their Clients.
Alerts - Financial Advisor’s alerting functionality allows advisors to monitor multiple accounts across multiple clients
Client Search - Financial Advisor is focused on allowing advisors to efficiently and effectively manage their clients
Responsibilities:
Involved in writing Traceability Matrix.
Test Case Design & Execution
Manually executing Test Cases.
Maintaining Silk Test automation scripts
Performed Functionality testing, Compatibility testing, Accessibility testing & Regression testing.
Reviewed software product documentation
Preparation of Defect & Test Summary Reports

Personal Finance Solutions ( Yodlee )
Client: Bank of America, Fidelity, Wachovia
Duration:
Role: Test Engineer
Environment: Windows 9x/ME/2K/XP, Oracle 10g, Toad, F-Secure SSH client, Resin App Server
Tools: Silk Test, Remedy & Perforce
Project Description:
(Data Aggregation Service for managing a User's Financial and Non-Financial Accounts)
On Center Suite, a financial web application, is a personal finance solution for the masses that automates, simplifies and enhances key personal financial management tasks. It includes a powerful set of tools that leverage aggregated personal account data and patented Auto-Login technology to provide a unique and compelling user experience.
Activity Centers available within the On Center Suite include:
Alerts for Financial accounts.
Net Worth Statement
Portfolio Manager
Expense Manager
Bill Reminders
Currency Conversion
Non-Financial Accounts like News, E-mail
Responsibilities:
Involved in writing Traceability Matrix.
Test Case Design & Execution
Test Plan and Test case reviews
Manually executing Test Cases.
Performed Functionality testing, Compatibility testing & Regression testing.
Preparation of Defect & Test Summary Reports

Online blood banking portal www.raktadan.org
Duration:
Role: Test Engineer
Environment: Windows NT/ 2000, HTML, Java Script, IIS 4.0, SQL Server, ASP
Testing Tools: Oak Test Automation Kit
Project Description:
Raktadan is an online blood banking system that allows a prospective blood donor to register on the portal. The donor may be an individual, a blood bank, or a corporate body. Anybody who is in need of blood can put a request for it on the portal. As soon as the request is made, the system automatically matches donors based on blood group and location, and e-mails are sent to all matching donors. Donors can then respond to the requester either through the portal or by contacting them directly. Any visitor to the portal can also respond to a particular request without registering as a donor.
Responsibilities:
Involved in Preparation and Execution of Test Cases.
Involved in Functional Testing.
Automating Test cases using Oak Test Automation tool kit
Regression testing of multiple releases & enhancements
Tested GUI functionality and Product functionality.
Documenting the results & Reporting defects.

Title: T&H Planning versions 2.0, 2.1, 2.2
Client: T&H Group, Holland
Duration:
Role: Test Engineer
Environment: Windows 2000 Professional, ASP, HTML and SQL-Server 2000
Project Description:
This application is being developed for a chartered accounting company. It is planning software that captures and maintains information on the different planning activities the company performs in order to meet the commitments made to its clients and to track the activities of its staff.
The user interface is in Dutch, which provides a challenging opportunity for testing. The activities include annual accounting, handling tax-related information, submission to the tax department, etc.
The system supports all aspects of the planning and maintenance work that users do, and gives users easy access to information on administrative activities by making it easy to create reports.
It provides a friendly, easy-to-use, browser-based interface that facilitates the capture, storage, analysis, and reporting of information, and hence better management decisions.
Responsibilities:
QA Estimation & Test plan creation
Requirements & Test Case Reviews
Test Case Design, traceability matrix and Test Execution
Bug tracking & escalation
Co-ordination with Development team and delivery manager
Sanity Testing, System testing, Compatibility & Regression Testing

Title: Effort Tracker
Duration:
Role: Test Engineer
Environment: Windows NT, Java, Servlets, MS Access, and IIS
Project Description:
Effort Tracker is a computerized system developed and implemented to capture project-level effort task-wise.
It is a multi-user, password-protected, web-based system with query and reporting capabilities.
The workflow and graphical report-generating capabilities eliminate the need to maintain hard copies of vital project metrics and also help in estimating future projects.
Responsibilities:
Responsible for Manual testing.
Involved in Test Cases preparation and execution.
Bug Reporting
Involved in documenting and tracking defects.


Title: E-Mitra - Office Work Organizer
Duration:
Environment: Windows NT, Java, Servlets, MS Access, and IIS
Project Description:
E-Mitra is a menu-driven application that can assist in managing the resources of an organization. Various departments in an organization, such as the Human Resources department and the Administration/Maintenance department, can keep track of their resources so that optimal utilization of those resources can be achieved. The system is expected to provide benefits like:
• An easy-to-use recording and tracking system for resource management
• Easy query/reporting facilities to help track different office activities like tour planning, MPLAD, recommendation letter processing, etc.

1000 Mail IDs of HR & Consultants

Written on 9:35 PM by MURALI KRISHNA

1000 Mail IDs of HR & Consultants

freshers@bplmail.com,
indiajobs@cadence.com,
freshers@cosystems.com,
freshers@infy.com,
career@integramicro.com,
freshers@sageindia.co.in,
recruitment@blore.tcs.co.in,
sourcing@tcscal.co.in,
fresher@sanyo.co.in,
noidahr.rec@st.com,
resume@quasarinnovations.com,
careers@ltitlblr.com,
bangaloregr@amdocs.com,
delhigr@amdocs.com,
freshers@cranessoftware.com,
dsphr@india.ti.com,
unicon@vsnl.com,
gauri@writeme.com,
freshers@induslogic.com,
freshers@inf.com
jobs@enterprisesoft.com
kashmira@soundtek.com
cyber1@vsnl.com
chunnu@satyam.net.in
pancham@pclink.com
jobs@esolvetech.com
adaptive@satyam.net.in
hrd@sybortechnologies.com
careers@oitlnet.com
recruit@synergyamerica.com
careers@oitinet.com
dshah@erpw.com
simplex@bgl.vsnl.net.in
careers@oitlnet.com
jobs@esolvetech.com
adaptive@hd1.vsnl.net.in
innsys@vsnl.com
kashmira@soundtek.com
rrecruit@vsnl.com
vkd.bvlinks@axcess.net.in
rramanna@inteliant.com
jobs@enterprisesoft.com
vandanap@duettech.com
PSIndia@rens.com
renworld@bom5.vsnl.net.in
Want2B@aditi.com
resume.india@rsys.com
telecom.enkay@gems.vsnl.net.in
mimi.goodman@arbitron.com
careers@comcompetence.com
amity@singnet.com.sg
alphacon@vsnl.com
vkd.bvlinks@aworld.net
hr@gracelabs.com
jobs@aplab.com
careers@gnostice.com
sanjeev@netcreativemind.com
lokeshmn@knowsys.net
careers@brain1consulting.com
HR-India@mcubeit.com (blore)
careers@citadel-soft.com (blore)
r.ramya@craftsilicon.com (blore)
viirgosolutions@rediffmail.com
jyothi@dewdrop.co.in
jobs@smart-bridge.com (hyd)
techruit@gmail.com
aoi.careers@atosorigin.com
jobs@ushafire.com
careers@yasutech.com
manoj@trans-quest.com (blore) - sound s/w technologies
careers.in@capgemini.com,
careers@teneoris.com,
rati.purwar@cgl.co.in,
talent.search@lntinfotech.com,
info@binotech.net,
jobs@humancommerce.com,
jobs@lifetreeindia.com,
jobs@sapientinformatics.com,
icihrd@comneti.com,
soumya.venkataramanan@iflexsolutions.com,
indiajobs@srgroup.net,
careers.bdc@qualcomm.com,
pat@zolon.com,
info@teracomgroup.com,
manushka@zerocorporation.com,
careers@winsoftech.com,
saraswathynew@ethicsoft.com,
jobinids@idssoft.com,
jobs@ispan-tech.com,
jobs@pixelinfotek.com,
padmag@deccanetworld.com,
swc@scl.co.in,
jobs@icelerate.com,
myhr@corejob.com,
itjobsnjobs@rediffmail.com,
Freshers2005@covansys.com,
hr_recruitment@sifycorp.com,
nkavitha@sriharitech.com,
alankar@equra.com,
fresher@dsrc.co.in,
shariff@delphisoft.com,
info@teracomgroup.com,
HR-India@mcubeit.com
tapoleena.dey@mphasis.com
nagesh@ezrecuit.com,
hr@vitinfotech.com,
louis.babu@eurothermadel.com,
balajee.ms@maplesesm.com,
sai_prashanth@dell.com,
careers@in.firstapex.com,
e_recuitment@scandent.com,
poornima@speedera.com,
indiacareers@efi.com,
jobs@bsil.com,
careers@igate.com,
gautam@mahindra.com,
pschoudary@corbus.com,
nvijay@thoutworks.com,
sanjay@cisctechnology.com,
hr@vitinfotech.com,
amrisha.singh@oracle.com,
gnrit@yahoo.com
bangalore@siriuspro.net
info@vxl.co.in
david@axsellit.com
career.scs@siemens.com
hrd@trigen.com
careers.fresher_blr@iflexsolution.com
jobs@dgbmicro.com
india_eng_recruiting@google.com
krishna.ka@in.ness.com
vinceraj@ssdi.sharp.co.in
hr.india@tek.com
shilpa@nextindia.com
kann_anq@hotmail.com
meera.m@polaris.com
sanjay@cisctechnology.com
nvijay@thoutworks.com
careers@igate.com
jobs@bsil.com
indiacareers@efi.com
poornima@speedera.com
e_recruitment@scandent.com
careers@in.firstapex.com
sai_prashanth@dell.com
megha.jain@grapecity.com
preeti.sharma@grapecity.com
praneeth.kobar@gmail.com (ENsoft, hyd)
jobs@ntlindia.com
abhilash@sequoiaindia.com
bdm@netlife.in
paruldevgan@hsbc.co.in (Java and n/w must)
shalini@gemini-india.com
hr@niharinfo.com (hyd)
praveen@iquest-consultants.com
careers.in@emeriocorp.com
srinathsap@xsilica.com (hyd)
jobs@nagconsultant.com
nandita_patkar@cms.com (hyd)
raghavendra@innovisioncorp.com
nascentsoft@yahoo.com
fresherjobs@ntlindia.com
hrd@bsw-soft.com,
recruitment@lasonindia.com
batchmate@hclcomnet.co.in
aoi.careers@atosorigin.com
hrd@bsw-soft.com
marina@cordiant.net
hr.spel@pg.siemens.com
info@mcubeit.com
careers@ispacesoftware.com
freshers@induslogic.com
careers-blr@yahoo-inc.com
jobs@tavant.com (blore)
freshers@quinnox.com (blore)
walkin@infotech.kilmist.com (blore)
shruthi@edisofttechnologies.com (blore)
jobs@scripps.com (hyd)
supriyak@mastek.com (blore)
careers@detech.co.in (hyd)
hr-da.spel@pg.siemens.com
jobs@sapientinformatics.com
jobs@humancommerce.com
info@ematrxindia.com
batty@shikshaplanet.com
jobs@akaerotek.com
neeraj@aztec.soft.net
freshers@bplmail.com
Cadence Design Systems - indiajobs@cadence.com
CG Smith Software (Bangalore) - fresher@cgs.cgsmith.soft.net
CO-SYSTEMS - freshers@cosystems.com
DeDuCo Software Systems (Bangalore) - careers@deduco.com
ERICSSON - freshers.blr@eci.ericsson.se
NUNTIUS Systems India (P) Ltd - fresher@india.nuntius.com
PHILIPS SOFTWARE (Bangalore) - reena.maria@philips.com
SAGE Design System (Bangalore) - freshers@sageindia.co.in
SIEMENS Information Systems (Bangalore) - hmsrecruit@sisl.co.in
SONY Bangalore - freshers@sonysard.co.in
r.sureshkumar@softima.com
careers@ispacesoftware.com
knack@freshersworld.com (Mumbai)
supriyak@mastek.com
tapoleena.dey@mphasis.com

SQL Server

Written on 9:30 PM by MURALI KRISHNA

SQL Server
Create Table Statement
Tables are the basic structure where data is stored in the database. Given that in most cases, there is no way for the database vendor to know ahead of time what your data storage needs are, chances are that you will need to create tables in the database yourself. Many database tools allow you to create tables without writing SQL, but given that tables are the container of all the data, it is important to include the CREATE TABLE syntax in this tutorial.
Before we dive into the SQL syntax for CREATE TABLE, it is a good idea to understand what goes into a table. Tables are divided into rows and columns. Each row represents one piece of data, and each column can be thought of as representing a component of that piece of data. So, for example, if we have a table for recording customer information, then the columns may include information such as First Name, Last Name, Address, City, Country, Birth Date, and so on. As a result, when we specify a table, we include the column headers and the data types for that particular column.
So what are data types? Typically, data comes in a variety of forms. It could be an integer (such as 1), a real number (such as 0.55), a string (such as 'sql'), a date/time expression (such as '2000-JAN-25 03:22:22 '), or even in binary format. When we specify a table, we need to specify the data type associated with each column (i.e., we will specify that 'First Name' is of type char(50) - meaning it is a string with 50 characters). One thing to note is that different relational databases allow for different data types, so it is wise to consult with a database-specific reference first.
The SQL syntax for CREATE TABLE is
CREATE TABLE "table_name"
("column 1" "data_type_for_ column_1" ,
"column 2" "data_type_for_ column_2" ,
... )
So, if we are to create the customer table specified as above, we would type in
CREATE TABLE customer
(First_Name char(50),
Last_Name char(50),
Address char(50),
City char(50),
Country char(25),
Birth_Date date)
Sometimes, we want to provide a default value for each column. A default value is used when you do not specify a column's value when inserting data into the table. To specify a default value, add "Default [value]" after the data type declaration. In the above example, if we want to default column "Address" to "Unknown" and City to "Mumbai", we would type in
CREATE TABLE customer
(First_Name char(50),
Last_Name char(50),
Address char(50) default 'Unknown',
City char(50) default 'Mumbai',
Country char(25),
Birth_Date date)

You can place constraints to limit the type of data that can go into a table. Such constraints can be specified when the table is first created via the CREATE TABLE statement, or after the table is already created via the ALTER TABLE statement.
Common types of constraints include the following:
- NOT NULL
- UNIQUE
- CHECK
- Primary Key
- Foreign Key
Each is described in detail below.
NOT NULL
By default, a column can hold NULL. If you do not want to allow NULL values in a column, you will want to place a constraint on that column specifying that NULL is not an allowable value.
For example, in the following statement,
CREATE TABLE Customer
(SID integer NOT NULL,
Last_Name varchar (30) NOT NULL,
First_Name varchar(30));
Columns "SID" and "Last_Name" cannot include NULL, while "First_Name" can include NULL.
UNIQUE
The UNIQUE constraint ensures that all values in a column are distinct.
For example, in the following statement,
CREATE TABLE Customer
(SID integer Unique,
Last_Name varchar (30),
First_Name varchar(30));
Column "SID" cannot include duplicate values, while such constraint does not hold for columns "Last_Name" and "First_Name" .
Please note that a column that is specified as a primary key must also be unique. At the same time, a column that's unique may or may not be a primary key.
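For instance, a sketch combining the two (the choice of making "Last_Name" unique here is purely for illustration):
CREATE TABLE Customer
(SID integer PRIMARY KEY,
Last_Name varchar(30) UNIQUE,
First_Name varchar(30));
Here "SID" is unique because it is the primary key, while "Last_Name" is unique but not a primary key.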
CHECK
The CHECK constraint ensures that all values in a column satisfy certain conditions.
For example, in the following statement,
CREATE TABLE Customer
(SID integer CHECK (SID > 0),
Last_Name varchar (30),
First_Name varchar(30));
Column "SID" must only include integers greater than 0.
Please note that the CHECK constraint does not get enforced by MySQL at this time.
Primary Key and Foreign Key are discussed in the next two sections.
Primary Key
A primary key is used to uniquely identify each row in a table. It can either be part of the actual record itself, or it can be an artificial field (one that has nothing to do with the actual record). A primary key can consist of one or more fields in a table. When multiple fields are used as a primary key, they are called a composite key.
Primary keys can be specified either when the table is created (using CREATE TABLE) or by changing the existing table structure (using ALTER TABLE).
Below are examples for specifying a primary key when creating a table:
MySQL:
CREATE TABLE Customer
(SID integer,
Last_Name varchar(30),
First_Name varchar(30),
PRIMARY KEY (SID));
Oracle:
CREATE TABLE Customer
(SID integer PRIMARY KEY,
Last_Name varchar(30),
First_Name varchar(30));
SQL Server:
CREATE TABLE Customer
(SID integer PRIMARY KEY,
Last_Name varchar(30),
First_Name varchar(30));
Below are examples for specifying a primary key by altering a table:
MySQL:
ALTER TABLE Customer ADD PRIMARY KEY (SID);
Oracle:
ALTER TABLE Customer ADD PRIMARY KEY (SID);
SQL Server:
ALTER TABLE Customer ADD PRIMARY KEY (SID);
Note: Before using the ALTER TABLE command to add a primary key, you'll need to make sure that the field is defined as 'NOT NULL' -- in other words, NULL cannot be an accepted value for that field.
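For instance, in MySQL (a sketch only; Oracle and SQL Server use slightly different ALTER syntax for this step), the column can be made NOT NULL before adding the key:
ALTER TABLE Customer MODIFY SID integer NOT NULL;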
Foreign Key
A foreign key is a field (or fields) that points to the primary key of another table. The purpose of the foreign key is to ensure referential integrity of the data. In other words, only values that are supposed to appear in the database are permitted.
For example, say we have two tables, a CUSTOMER table that includes all customer data, and an ORDERS table that includes all customer orders. The constraint here is that all orders must be associated with a customer that is already in the CUSTOMER table. In this case, we will place a foreign key on the ORDERS table and have it relate to the primary key of the CUSTOMER table. This way, we can ensure that all orders in the ORDERS table are related to a customer in the CUSTOMER table. In other words, the ORDERS table cannot contain information on a customer that is not in the CUSTOMER table.
The structure of these two tables will be as follows:
Table CUSTOMER
column name characteristic
SID Primary Key
Last_Name
First_Name
Table ORDERS
column name characteristic
Order_ID Primary Key
Order_Date
Customer_SID Foreign Key
Amount
In the above example, the Customer_SID column in the ORDERS table is a foreign key pointing to the SID column in the CUSTOMER table.
Below we show examples of how to specify the foreign key when creating the ORDERS table:
MySQL:
CREATE TABLE ORDERS
(Order_ID integer,
Order_Date date,
Customer_SID integer,
Amount double,
Primary Key (Order_ID),
Foreign Key (Customer_SID) references CUSTOMER(SID));
Oracle:
CREATE TABLE ORDERS
(Order_ID integer primary key,
Order_Date date,
Customer_SID integer references CUSTOMER(SID),
Amount double);
SQL Server:
CREATE TABLE ORDERS
(Order_ID integer primary key,
Order_Date datetime,
Customer_SID integer references CUSTOMER(SID),
Amount double);
Below are examples for specifying a foreign key by altering a table. This assumes that the ORDERS table has been created, and the foreign key has not yet been put in:
MySQL:
ALTER TABLE ORDERS
ADD FOREIGN KEY (customer_sid) REFERENCES CUSTOMER(SID);
Oracle:
ALTER TABLE ORDERS
ADD CONSTRAINT fk_orders1 FOREIGN KEY (customer_sid) REFERENCES CUSTOMER(SID);
SQL Server:
ALTER TABLE ORDERS
ADD FOREIGN KEY (customer_sid) REFERENCES CUSTOMER(SID);
Create View Statement
Views can be considered virtual tables. Generally speaking, a table has a set of definitions and physically stores the data. A view also has a set of definitions, which is built on top of table(s) or other view(s), but it does not physically store the data.
The syntax for creating a view is as follows:
CREATE VIEW "VIEW_NAME" AS "SQL Statement"
"SQL Statement" can be any of the SQL statements we have discussed in this tutorial.
Let's use a simple example to illustrate. Say we have the following table:
TABLE Customer
(First_Name char(50),
Last_Name char(50),
Address char(50),
City char(50),
Country char(25),
Birth_Date date)
and we want to create a view called V_Customer that contains only the First_Name, Last_Name, and Country columns from this table, we would type in,
CREATE VIEW V_Customer
AS SELECT First_Name, Last_Name, Country
FROM Customer
Now we have a view called V_Customer with the following structure:
View V_Customer
(First_Name char(50),
Last_Name char(50),
Country char(25))
We can also use a view to apply joins to two tables. In this case, users only see one view rather than two tables, and the SQL statement users need to issue becomes much simpler. Let's say we have the following two tables:
Table Store_Information
store_name Sales Date
Los Angeles $1500 Jan-05-1999
San Diego $250 Jan-07-1999
Los Angeles $300 Jan-08-1999
Boston $700 Jan-08-1999
Table Geography
region_name store_name
East Boston
East New York
West Los Angeles
West San Diego
and we want to build a view that has sales by region information. We would issue the following SQL statement:
CREATE VIEW V_REGION_SALES
AS SELECT A1.region_name REGION, SUM(A2.Sales) SALES
FROM Geography A1, Store_Information A2
WHERE A1.store_name = A2.store_name
GROUP BY A1.region_name
This gives us a view, V_REGION_SALES, that has been defined to store sales by region records. If we want to find out the content of this view, we type in,
SELECT * FROM V_REGION_SALES
Result:
REGION SALES
East $700
West $2050
Create Index Statement
Indexes help us retrieve data from tables quicker. Let's use an example to illustrate this point: Say we are interested in reading about how to grow peppers in a gardening book. Instead of reading the book from the beginning until we find a section on peppers, it is much quicker for us to go to the index section at the end of the book, locate which pages contain information on peppers, and then go to these pages directly. Going to the index first saves us time and is by far a more efficient method for locating the information we need.
The same principle applies for retrieving data from a database table. Without an index, the database system reads through the entire table (this process is called a 'table scan') to locate the desired information. With the proper index in place, the database system can then first go through the index to find out where to retrieve the data, and then go to these locations directly to get the needed data. This is much faster.
Therefore, it is often desirable to create indexes on tables. An index can cover one or more columns. The general syntax for creating an index is:
CREATE INDEX "INDEX_NAME" ON "TABLE_NAME" (COLUMN_NAME)
Let's assume that we have the following table,
TABLE Customer
(First_Name char(50),
Last_Name char(50),
Address char(50),
City char(50),
Country char(25),
Birth_Date date)
and we want to create an index on the column Last_Name, we would type in,
CREATE INDEX IDX_CUSTOMER_LAST_NAME
on CUSTOMER (Last_Name)
If we want to create an index on both City and Country, we would type in,
CREATE INDEX IDX_CUSTOMER_LOCATION
on CUSTOMER (City, Country)
There is no strict rule on how to name an index. The generally accepted method is to place a prefix, such as "IDX_", before an index name to avoid confusion with other database objects. It is also a good idea to provide information on which table and column(s) the index is used on.
Please note that the exact syntax for CREATE INDEX may be different for different databases. You should consult with your database reference manual for the precise syntax.
Alter Table Statement
Once a table is created in the database, there are many occasions where one may wish to change the structure of the table. Typical cases include the following:
- Add a column
- Drop a column
- Change a column name
- Change the data type for a column
Please note that the above is not an exhaustive list. There are other instances where ALTER TABLE is used to change the table structure, such as changing the primary key specification or adding a unique constraint to a column.
The SQL syntax for ALTER TABLE is
ALTER TABLE "table_name"
[alter specification]
[alter specification] is dependent on the type of alteration we wish to perform. For the uses cited above, the [alter specification] statements are:
• Add a column: ADD "column 1" "data type for column 1"
• Drop a column: DROP "column 1"
• Change a column name: CHANGE "old column name" "new column name" "data type for new column name"
• Change the data type for a column: MODIFY "column 1" "new data type"
Let's run through examples for each one of the above, using the "customer" table created in the CREATE TABLE section:
Table customer
Column Name Data Type
First_Name char(50)
Last_Name char(50)
Address char(50)
City char(50)
Country char(25)
Birth_Date date
First, we want to add a column called "Gender" to this table. To do this, we key in:
ALTER table customer add Gender char(1)
Resulting table structure:
Table customer
Column Name Data Type
First_Name char(50)
Last_Name char(50)
Address char(50)
City char(50)
Country char(25)
Birth_Date date
Gender char(1)
Next, we want to rename "Address" to "Addr". To do this, we key in,
ALTER table customer change Address Addr char(50)
Resulting table structure:
Table customer
Column Name Data Type
First_Name char(50)
Last_Name char(50)
Addr char(50)
City char(50)
Country char(25)
Birth_Date date
Gender char(1)
Then, we want to change the data type for "Addr" to char(30). To do this, we key in,
ALTER table customer modify Addr char(30)
Resulting table structure:
Table customer
Column Name Data Type
First_Name char(50)
Last_Name char(50)
Addr char(30)
City char(50)
Country char(25)
Birth_Date date
Gender char(1)
Finally, we want to drop the column "Gender". To do this, we key in,
ALTER table customer drop Gender
Resulting table structure:
Table customer
Column Name Data Type
First_Name char(50)
Last_Name char(50)
Addr char(30)
City char(50)
Country char(25)
Birth_Date date
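As noted earlier, ALTER TABLE can also be used to add constraints to an existing table. For instance, a sketch (the constraint name uq_customer_name is made up for this example, and the exact syntax may vary by database), adding a unique constraint on the name columns:
ALTER table customer add CONSTRAINT uq_customer_name UNIQUE (First_Name, Last_Name)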

Drop Table Statement
Sometimes we may decide that we need to get rid of a table in the database for some reason. In fact, it would be problematic if we could not do so, because this could create a maintenance nightmare for DBAs. Fortunately, SQL allows us to do it with the DROP TABLE command. The syntax for DROP TABLE is
DROP TABLE "table_name"
So, if we wanted to drop the table called customer that we created in the CREATE TABLE section, we simply type,
DROP TABLE customer
Truncate Table Statement
Sometimes we wish to get rid of all the data in a table. One way of doing this is with DROP TABLE, which we saw in the last section. But what if we wish to simply get rid of the data but not the table itself? For this, we can use the TRUNCATE TABLE command. The syntax for TRUNCATE TABLE is
TRUNCATE TABLE "table_name"
So, if we wanted to truncate the table called customer that we created in SQL CREATE TABLE, we simply type,
TRUNCATE TABLE customer

Insert Into Statement
In the previous sections, we have seen how to retrieve information from tables. But how do these rows of data get into the tables in the first place? That is what this section, covering the INSERT statement, and the next section, covering the UPDATE statement, are about.
In SQL, there are essentially two ways to INSERT data into a table: one is to insert it one row at a time, the other is to insert multiple rows at a time. Let's first look at how we may INSERT data one row at a time.
The syntax for inserting data into a table one row at a time is as follows:
INSERT INTO "table_name" ("column1", "column2", ...)
VALUES ("value1", "value2", ...)
Assuming that we have a table that has the following structure,
Table Store_Information
Column Name Data Type
store_name char(50)
Sales float
Date datetime
and now we wish to insert one additional row into the table representing the sales data for Los Angeles on January 10, 1999. On that day, this store had $900 in sales. We will hence use the following SQL script:
INSERT INTO Store_Information (store_name, Sales, Date)
VALUES ('Los Angeles', 900, 'Jan-10-1999')
The second type of INSERT INTO allows us to insert multiple rows into a table. Unlike the previous example, where we insert a single row by specifying its values for all columns, we now use a SELECT statement to specify the data that we want to insert into the table. If you are thinking whether this means that you are using information from another table, you are correct. The syntax is as follows:
INSERT INTO "table1" ("column1", "column2", ...)
SELECT "column3", "column4", ...
FROM "table2"
Note that this is the simplest form. The entire statement can easily contain WHERE, GROUP BY, and HAVING clauses, as well as table joins and aliases.
So for example, if we wish to have a table, Store_Information, that collects the sales information for year 1998, and you already know that the source data resides in the Sales_Information table, we'll type in:
INSERT INTO Store_Information (store_name, Sales, Date)
SELECT store_name, Sales, Date
FROM Sales_Information
WHERE Year(Date) = 1998
Here I have used the SQL Server syntax to extract the year information out of a date. Other relational databases will have different syntax. For example, in Oracle, you will use to_char(date, 'yyyy')=1998.
Update Statement
Once there's data in the table, we might find that there is a need to modify the data. To do so, we can use the UPDATE command. The syntax for this is
UPDATE "table_name"
SET "column_1" = [new value]
WHERE {condition}
For example, say we currently have a table as below:
Table Store_Information
store_name Sales Date
Los Angeles $1500 Jan-05-1999
San Diego $250 Jan-07-1999
Los Angeles $300 Jan-08-1999
Boston $700 Jan-08-1999
and we notice that the sales for Los Angeles on 01/08/1999 is actually $500 instead of $300, and that particular entry needs to be updated. To do so, we use the following SQL:
UPDATE Store_Information
SET Sales = 500
WHERE store_name = " Los Angeles "
AND Date = "Jan-08-1999"
The resulting table would look like
Table Store_Information
store_name Sales Date
Los Angeles $1500 Jan-05-1999
San Diego $250 Jan-07-1999
Los Angeles $500 Jan-08-1999
Boston $700 Jan-08-1999
In this case, there is only one row that satisfies the condition in the WHERE clause. If there are multiple rows that satisfy the condition, all of them will be modified.
It is also possible to UPDATE multiple columns at the same time. The syntax in this case would look like the following:
UPDATE "table_name"
SET column_1 = [value1], column_2 = [value2]
WHERE {condition}
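For example (a sketch only; the corrected values below are made up purely for illustration), to change both the Sales figure and the Date of the San Diego entry in one statement, we would key in,
UPDATE Store_Information
SET Sales = 600, Date = 'Jan-15-1999'
WHERE store_name = 'San Diego'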
Delete From Statement
Sometimes we may wish to get rid of records from a table. To do so, we can use the DELETE FROM command. The syntax for this is
DELETE FROM "table_name"
WHERE {condition}
It is easiest to use an example. Say we currently have a table as below:
Table Store_Information
store_name Sales Date
Los Angeles $1500 Jan-05-1999
San Diego $250 Jan-07-1999
Los Angeles $300 Jan-08-1999
Boston $700 Jan-08-1999
and we decide not to keep any information on Los Angeles in this table. To accomplish this, we type the following SQL:
DELETE FROM Store_Information
WHERE store_name = " Los Angeles "
Now the content of table would look like,
Table Store_Information
store_name Sales Date
San Diego $250 Jan-07-1999
Boston $700 Jan-08-1999
Select
What do we use SQL commands for? A common use is to select data from the tables located in a database. Immediately, we see two keywords: we need to SELECT information FROM a table. (Note that a table is a container that resides in the database where the data is stored. For more information about how to manipulate tables, go to the Table Manipulation Section). Hence we have the most basic SQL structure:
SELECT "column_name" FROM "table_name"
To illustrate the above example, assume that we have the following table:
Table Store_Information
store_name Sales Date
Los Angeles $1500 Jan-05-1999
San Diego $250 Jan-07-1999
Los Angeles $300 Jan-08-1999
Boston $700 Jan-08-1999
We shall use this table as an example throughout the tutorial (this table will appear in all sections). To select all the stores in this table, we key in,
SELECT store_name FROM Store_Information
Result:
store_name
Los Angeles
San Diego
Los Angeles
Boston
Multiple column names can be selected, as well as multiple table names.
Distinct
The SELECT keyword allows us to grab all information from a column (or columns) in a table. This, of course, necessarily means that there will be redundancies. What if we only want to select each DISTINCT element? This is easy to accomplish in SQL. All we need to do is add DISTINCT after SELECT. The syntax is as follows:
SELECT DISTINCT "column_name"
FROM "table_name"
For example, to select all distinct stores in Table Store_Information,
Table Store_Information
store_name Sales Date
Los Angeles $1500 Jan-05-1999
San Diego $250 Jan-07-1999
Los Angeles $300 Jan-08-1999
Boston $700 Jan-08-1999

we key in,
SELECT DISTINCT store_name FROM Store_Information
Result:
store_name
Los Angeles
San Diego
Boston
Where
Next, we might want to conditionally select the data from a table. For example, we may want to only retrieve stores with sales above $1,000. To do this, we use the WHERE keyword. The syntax is as follows:
SELECT "column_name"
FROM "table_name"
WHERE "condition"
For example, to select all stores with sales above $1,000 in Table Store_Information,
Table Store_Information
store_name Sales Date
Los Angeles $1500 Jan-05-1999
San Diego $250 Jan-07-1999
Los Angeles $300 Jan-08-1999
Boston $700 Jan-08-1999

we key in,
SELECT store_name
FROM Store_Information
WHERE Sales > 1000

Result:
store_name
Los Angeles
And Or
In the previous section, we have seen that the WHERE keyword can be used to conditionally select data from a table. This condition can be a simple condition (like the one presented in the previous section), or it can be a compound condition. Compound conditions are made up of multiple simple conditions connected by AND or OR. There is no limit to the number of simple conditions that can be present in a single SQL statement.
The syntax for a compound condition is as follows:
SELECT "column_name"
FROM "table_name"
WHERE "simple condition"
{[AND|OR] "simple condition"}+
The {}+ means that the expression inside the braces will occur one or more times. Note that AND and OR can be used interchangeably. In addition, we may use parentheses () to indicate the order in which the conditions are evaluated.
For example, we may wish to select all stores with sales greater than $1,000 or all stores with sales less than $500 but greater than $275 in Table Store_Information,
Table Store_Information
store_name Sales Date
Los Angeles $1500 Jan-05-1999
San Diego $250 Jan-07-1999
San Francisco $300 Jan-08-1999
Boston $700 Jan-08-1999

we key in,
SELECT store_name
FROM Store_Information
WHERE Sales > 1000
OR (Sales < 500 AND Sales > 275)

Result:
store_name
Los Angeles
San Francisco
In
In SQL, the IN keyword has two uses, and this section introduces the one related to the WHERE clause. When used in this context, we know the exact values we want to match for at least one of the columns. The syntax for using the IN keyword is as follows:
SELECT "column_name"
FROM "table_name"
WHERE "column_name" IN ('value1', 'value2', ...)
The number of values in the parentheses can be one or more, with each value separated by a comma. Values can be numerical or characters. If there is only one value inside the parentheses, this command is equivalent to
WHERE "column_name" = 'value1'
For example, we may wish to select all records for the Los Angeles and the San Diego stores in Table Store_Information,
Table Store_Information
store_name Sales Date
Los Angeles $1500 Jan-05-1999
San Diego $250 Jan-07-1999
San Francisco $300 Jan-08-1999
Boston $700 Jan-08-1999

we key in,
SELECT *
FROM Store_Information
WHERE store_name IN ('Los Angeles', 'San Diego')

Result:
store_name Sales Date
Los Angeles $1500 Jan-05-1999
San Diego $250 Jan-07-1999
Between
Whereas the IN keyword helps limit the selection criteria to one or more discrete values, the BETWEEN keyword allows for selecting a range. The syntax for the BETWEEN clause is as follows:
SELECT "column_name"
FROM "table_name"
WHERE "column_name" BETWEEN 'value1' AND 'value2'
This will select all rows whose column has a value between 'value1' and 'value2'.
For example, we may wish to view all sales information between January 6, 1999 and January 10, 1999 in Table Store_Information,
Table Store_Information
store_name Sales Date
Los Angeles $1500 Jan-05-1999
San Diego $250 Jan-07-1999
San Francisco $300 Jan-08-1999
Boston $700 Jan-08-1999

we key in,
SELECT *
FROM Store_Information
WHERE Date BETWEEN 'Jan-06-1999' AND 'Jan-10-1999'
Note that dates may be stored in different formats in different databases. This tutorial simply chooses one of the formats.
Result:
store_name Sales Date
San Diego $250 Jan-07-1999
San Francisco $300 Jan-08-1999
Boston $700 Jan-08-1999
Like
LIKE is another keyword that is used in the WHERE clause. Basically, LIKE allows you to do a search based on a pattern rather than specifying exactly what is desired (as in IN) or spelling out a range (as in BETWEEN). The syntax is as follows:
SELECT "column_name"
FROM "table_name"
WHERE "column_name" LIKE {PATTERN}
{PATTERN} often consists of wildcards. Here are some examples:

• 'A_Z': All strings that start with 'A', followed by exactly one other character, and end with 'Z'. For example, 'ABZ' and 'A2Z' would both satisfy the condition, while 'AKKZ' would not (because there are two characters between A and Z instead of one).
• 'ABC%': All strings that start with 'ABC'. For example, 'ABCD' and 'ABCABC' would both satisfy the condition.
• '%XYZ': All strings that end with 'XYZ'. For example, 'WXYZ' and 'ZZXYZ' would both satisfy the condition.
• '%AN%': All strings that contain the pattern 'AN' anywhere. For example, 'LOS ANGELES' and 'SAN FRANCISCO' would both satisfy the condition.
Let's say we have the following table:
Table Store_Information
store_name Sales Date
LOS ANGELES $1500 Jan-05-1999
SAN DIEGO $250 Jan-07-1999
SAN FRANCISCO $300 Jan-08-1999
BOSTON $700 Jan-08-1999

We want to find all stores whose name contains 'AN'. To do so, we key in,
SELECT *
FROM Store_Information
WHERE store_name LIKE '%AN%'

Result:
store_name Sales Date
LOS ANGELES $1500 Jan-05-1999
SAN DIEGO $250 Jan-07-1999
SAN FRANCISCO $300 Jan-08-1999
Order By
So far, we have seen how to get data out of a table using SELECT and WHERE commands. Often, however, we need to list the output in a particular order. This could be in ascending order, in descending order, or could be based on either numerical value or text value. In such cases, we can use the ORDER BY keyword to achieve our goal.
The syntax for an ORDER BY statement is as follows:
SELECT "column_name"
FROM "table_name"
[WHERE "condition"]
ORDER BY "column_name" [ASC, DESC]
The [] means that the WHERE statement is optional. However, if a WHERE clause exists, it comes before the ORDER BY clause. ASC means that the results will be shown in ascending order, and DESC means that the results will be shown in descending order. If neither is specified, the default is ASC.
It is possible to order by more than one column. In this case, the ORDER BY clause above becomes
ORDER BY "column_name1" [ASC, DESC], "column_name2" [ASC, DESC]
Assuming that we choose ascending order for both columns, the output will be ordered in ascending order according to column 1. If there is a tie for the value of column 1, we then sort in ascending order by column 2 (see the example at the end of this section).
For example, we may wish to list the contents of Table Store_Information by dollar amount, in descending order:
Table Store_Information
store_name Sales Date
Los Angeles $1500 Jan-05-1999
San Diego $250 Jan-07-1999
San Francisco $300 Jan-08-1999
Boston $700 Jan-08-1999

we key in,
SELECT store_name, Sales, Date
FROM Store_Information
ORDER BY Sales DESC

Result:
store_name Sales Date
Los Angeles $1500 Jan-05-1999
Boston $700 Jan-08-1999
San Francisco $300 Jan-08-1999
San Diego $250 Jan-07-1999
In addition to the column name, we may also use the column position (based on the SQL query) to indicate which column we want to apply the ORDER BY clause to. The first column is 1, the second column is 2, and so on. In the above example, we would achieve the same results with the following command:
SELECT store_name, Sales, Date
FROM Store_Information
ORDER BY 2 DESC
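Finally, as mentioned above, we can order by more than one column. As a quick illustration (a sketch; the choice of sort columns is arbitrary), to sort by Date in ascending order and, within the same date, by Sales in descending order, we would key in,
SELECT store_name, Sales, Date
FROM Store_Information
ORDER BY Date ASC, Sales DESC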
Aggregate Functions
Since we have started dealing with numbers, the next natural question to ask is whether it is possible to do math on those numbers, such as summing them up or taking their average. The answer is yes! SQL has several aggregate functions, and they are:
- AVG
- COUNT
- MAX
- MIN
- SUM
The syntax for using functions is,
SELECT "function type"("column_ name")
FROM "table_name"
For example, if we want to get the sum of all sales from the following table,
Table Store_Information
store_name Sales Date
Los Angeles $1500 Jan-05-1999
San Diego $250 Jan-07-1999
Los Angeles $300 Jan-08-1999
Boston $700 Jan-08-1999
we would type in
SELECT SUM(Sales) FROM Store_Information

Result:
SUM(Sales)
$2750
$2750 represents the sum of all Sales entries: $1500 + $250 + $300 + $700.
In addition to using functions, it is also possible to use SQL to perform simple tasks such as addition (+) and subtraction (-). For character-type data, there are also several string functions available, such as concatenation, trim, and substring functions. Different RDBMS vendors have different string functions implementations, and it is best to consult the references for your RDBMS to see how these functions are used.
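As a quick illustration of simple arithmetic in a query (a sketch only; subtracting 50 from each Sales value is an arbitrary operation chosen just for this example), we could key in,
SELECT store_name, Sales - 50
FROM Store_Information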
Count
Another aggregate function is COUNT. This allows us to COUNT the number of rows in a certain table. The syntax is,
SELECT COUNT("column_ name")
FROM "table_name"
For example, if we want to find the number of store entries in our table,
Table Store_Information
store_name Sales Date
Los Angeles $1500 Jan-05-1999
San Diego $250 Jan-07-1999
Los Angeles $300 Jan-08-1999
Boston $700 Jan-08-1999
we'd key in
SELECT COUNT(store_name)
FROM Store_Information
Result:
Count(store_name)
4
COUNT and DISTINCT can be used together in a statement to fetch the number of distinct entries in a table. For example, if we want to find out the number of distinct stores, we'd type,
SELECT COUNT(DISTINCT store_name)
FROM Store_Information
Result:
Count(DISTINCT store_name)
3

Group By
Now we return to the aggregate functions. Remember we used the SUM keyword to calculate the total sales for all stores? What if we want to calculate the total sales for each store? Well, we need to do two things: First, we need to make sure we select the store name as well as total sales. Second, we need to make sure that all the sales figures are grouped by stores. The corresponding SQL syntax is,
SELECT "column_name1" , SUM("column_ name2")
FROM "table_name"
GROUP BY "column_name1"
Let's illustrate using the following table,
Table Store_Information
store_name Sales Date
Los Angeles $1500 Jan-05-1999
San Diego $250 Jan-07-1999
Los Angeles $300 Jan-08-1999
Boston $700 Jan-08-1999
We want to find total sales for each store. To do so, we would key in,
SELECT store_name, SUM(Sales)
FROM Store_Information
GROUP BY store_name
Result:
store_name SUM(Sales)
Los Angeles $1800
San Diego $250
Boston $700
The GROUP BY keyword is used when we are selecting multiple columns from a table (or tables) and at least one aggregate function appears in the SELECT statement. When that happens, we need to GROUP BY all the other selected columns, i.e., all columns except the one(s) operated on by the aggregate function.
Having
Another thing people may want to do is to limit the output based on the corresponding sum (or any other aggregate functions). For example, we might want to see only the stores with sales over $1,500. Instead of using the WHERE clause in the SQL statement, though, we need to use the HAVING clause, which is reserved for aggregate functions. The HAVING clause is typically placed near the end of the SQL statement, and a SQL statement with the HAVING clause may or may not include the GROUP BY clause. The syntax for HAVING is,
SELECT "column_name1" , SUM("column_ name2")
FROM "table_name"
GROUP BY "column_name1"
HAVING (arithmetic function condition)
Note: the GROUP BY clause is optional.
In our example, table Store_Information,
Table Store_Information
store_name Sales Date
Los Angeles $1500 Jan-05-1999
San Diego $250 Jan-07-1999
Los Angeles $300 Jan-08-1999
Boston $700 Jan-08-1999
we would type,
SELECT store_name, SUM(sales)
FROM Store_Information
GROUP BY store_name
HAVING SUM(sales) > 1500
Result:
store_name SUM(Sales)
Los Angeles $1800
Alias
We next focus on the use of aliases. There are two types of aliases that are used most frequently: column alias and table alias.
In short, column aliases exist to help organize output. In the previous example, whenever we see total sales, it is listed as SUM(Sales). While this is comprehensible, we can envision cases where the column heading can be complicated (especially if it involves several arithmetic operations). Using a column alias makes the output much more readable.
The second type of alias is the table alias. This is accomplished by putting an alias directly after the table name in the FROM clause. This is convenient when you want to obtain information from two separate tables (the technical term is 'perform joins'). The advantage of using a table alias becomes readily apparent when we talk about joins.
Before we get into joins, though, let's look at the syntax for both the column and table aliases:
SELECT "table_alias" ."column_ name1" "column_alias"
FROM "table_name" "table_alias"
Briefly, both types of aliases are placed directly after the item they alias, separated by a space. We again use our table, Store_Information,
Table Store_Information
store_name Sales Date
Los Angeles $1500 Jan-05-1999
San Diego $250 Jan-07-1999
Los Angeles $300 Jan-08-1999
Boston $700 Jan-08-1999
We use the same example as that in the SQL GROUP BY section, except that we have put in both the column alias and the table alias:
SELECT A1.store_name Store, SUM(A1.Sales) "Total Sales"
FROM Store_Information A1
GROUP BY A1.store_name
Result:
Store Total Sales
Los Angeles $1800
San Diego $250
Boston $700
Notice the difference in the result: the column titles are now different. That is the result of using the column alias. Instead of the somewhat cryptic "SUM(Sales)", we now have "Total Sales", which is much more understandable, as the column header. The advantage of using a table alias is not apparent in this example; however, it will become evident in the next section.
Join
Now we want to look at joins. To do joins correctly in SQL requires many of the elements we have introduced so far. Let's assume that we have the following two tables,
Table Store_Information
store_name Sales Date
Los Angeles $1500 Jan-05-1999
San Diego $250 Jan-07-1999
Los Angeles $300 Jan-08-1999
Boston $700 Jan-08-1999
Table Geography
region_name store_name
East Boston
East New York
West Los Angeles
West San Diego
and we want to find out sales by region. We see that table Geography includes information on regions and stores, and table Store_Information contains sales information for each store. To get the sales information by region, we have to combine the information from the two tables. Examining the two tables, we find that they are linked via the common field, "store_name". We will first present the SQL statement and explain the use of each segment later:
SELECT A1.region_name REGION, SUM(A2.Sales) SALES
FROM Geography A1, Store_Information A2
WHERE A1.store_name = A2.store_name
GROUP BY A1.region_name
Result:
REGION SALES
East $700
West $2050
The first two lines tell SQL to select two fields, the first one is the field "region_name" from table Geography (aliased as REGION), and the second one is the sum of the field "Sales" from table Store_Information (aliased as SALES). Notice how the table aliases are used here: Geography is aliased as A1, and Store_Information is aliased as A2. Without the aliasing, the first line would become
SELECT Geography.region_name REGION, SUM(Store_Information.Sales) SALES
which is much more cumbersome. In essence, table aliases make the entire SQL statement easier to understand, especially when multiple tables are included.
Next, we turn our attention to line 3, the WHERE statement. This is where the condition of the join is specified. In this case, we want to make sure that the content in "store_name" in table Geography matches that in table Store_Information, and the way to do it is to set them equal. This WHERE statement is essential in making sure you get the correct output. Without the correct WHERE statement, a Cartesian join will result. Cartesian joins result in the query returning every possible combination of rows from the two tables (or however many tables are listed in the FROM statement). In this case, a Cartesian join would result in a total of 4 x 4 = 16 rows being returned.
Outer Join
Previously, we looked at the inner join, where we select only the rows common to the tables participating in the join. What about the cases where we are interested in selecting elements in a table regardless of whether they are present in the second table? We will now need to use the SQL OUTER JOIN command.
The syntax for performing an outer join in SQL is database-dependent. For example, in Oracle we place a "(+)" in the WHERE clause on the side opposite the table whose rows we want to include in full.
Let's assume that we have the following two tables,
Table Store_Information
store_name Sales Date
Los Angeles $1500 Jan-05-1999
San Diego $250 Jan-07-1999
Los Angeles $300 Jan-08-1999
Boston $700 Jan-08-1999
Table Geography
region_name store_name
East Boston
East New York
West Los Angeles
West San Diego
and we want to find out the sales amount for all of the stores. If we do a regular join, we will not be able to get what we want because we will have missed "New York", since it does not appear in the Store_Information table. Therefore, we need to perform an outer join on the two tables above:
SELECT A1.store_name, SUM(A2.Sales) SALES
FROM Geography A1, Store_Information A2
WHERE A1.store_name = A2.store_name (+)
GROUP BY A1.store_name
Note that in this case, we are using the Oracle syntax for outer join.
Result:
store_name SALES
Boston $700
New York
Los Angeles $1800
San Diego $250
Note: NULL is returned when there is no match on the second table. In this case, "New York" does not appear in the table Store_Information, thus its corresponding "SALES" column is NULL.
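For reference, most databases also support the ANSI-standard outer join syntax, which expresses the same query without the Oracle-specific "(+)" notation (a sketch, assuming ANSI join support in your database):
SELECT A1.store_name, SUM(A2.Sales) SALES
FROM Geography A1 LEFT OUTER JOIN Store_Information A2
ON A1.store_name = A2.store_name
GROUP BY A1.store_name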
Concatenate
Sometimes it is necessary to combine together (concatenate) the results from several different fields. Each database provides a way to do this:
• MySQL: CONCAT()
• Oracle: CONCAT(), ||
• SQL Server: +
The syntax for CONCAT() is as follows:
CONCAT(str1, str2, str3, ...): Concatenate str1, str2, str3, and any other strings together. Please note the Oracle CONCAT() function only allows two arguments -- only two strings can be put together at a time using this function. However, it is possible to concatenate more than two strings at a time in Oracle using '||'.
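As a side note, the two-argument limit in Oracle can also be worked around by nesting CONCAT() calls, for example:
CONCAT(CONCAT(str1, str2), str3)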
Let's look at some examples. Assume we have the following table:
Table Geography
region_name store_name
East Boston
East New York
West Los Angeles
West San Diego
Example 1:
MySQL/Oracle:
SELECT CONCAT(region_name, store_name) FROM Geography
WHERE store_name = 'Boston';
Result:
'EastBoston'
Example 2:
Oracle:
SELECT region_name || ' ' || store_name FROM Geography
WHERE store_name = 'Boston';
Result:
'East Boston'
Example 3:
SQL Server:
SELECT region_name + ' ' + store_name FROM Geography
WHERE store_name = 'Boston';
Result:
'East Boston'
Substring
The Substring function in SQL is used to grab a portion of the stored data. This function is called differently for the different databases:
• MySQL: SUBSTR(), SUBSTRING()
• Oracle: SUBSTR()
• SQL Server: SUBSTRING()
The most frequent uses are as follows (we will use SUBSTR() here):
SUBSTR(str, pos): Select all characters from str starting with position pos. Note that this syntax is not supported in SQL Server.
SUBSTR(str, pos, len): Starting with the pos-th character in str, select the next len characters.
Assume we have the following table:
Table Geography
region_name store_name
East Boston
East New York
West Los Angeles
West San Diego
Example 1:
SELECT SUBSTR(store_name, 3)
FROM Geography
WHERE store_name = 'Los Angeles';
Result:
's Angeles'
Example 2:
SELECT SUBSTR(store_name, 2, 4)
FROM Geography
WHERE store_name = 'San Diego';
Result:
'an D'
Trim
The TRIM function in SQL is used to remove a specified prefix or suffix from a string. The most common pattern removed is white space. This function is called differently in different databases:
• MySQL: TRIM(), RTRIM(), LTRIM()
• Oracle: RTRIM(), LTRIM()
• SQL Server: RTRIM(), LTRIM()
The syntax for these trim functions is:
TRIM([[LOCATION] [remstr] FROM ] str): [LOCATION] can be either LEADING, TRAILING, or BOTH. This function gets rid of the [remstr] pattern from either the beginning of the string or the end of the string, or both. If no [remstr] is specified, white spaces are removed.
LTRIM(str): Removes all white spaces from the beginning of the string.
RTRIM(str): Removes all white spaces at the end of the string.
Example 1:
SELECT TRIM(' Sample ');
Result:
'Sample'
Example 2:
SELECT LTRIM(' Sample ');
Result:
'Sample '
Example 3:
SELECT RTRIM(' Sample ');
Result:
' Sample'
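The [LOCATION] and [remstr] arguments can be illustrated with one more example (this uses the MySQL-style TRIM syntax described above; the exact form differs in other databases):
SELECT TRIM(LEADING 'x' FROM 'xxxSample');
Result:
'Sample'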
UNIX Basics

Written on 9:24 PM by MURALI KRISHNA

UNIX BASICS
Main features of UNIX:

Multi-user – more than one user can use the machine at the same time.

Multitasking – more than one program can be run at a time.

Portability – the operating system can easily be ported to run on different hardware platforms.

Commands
ls
when invoked without any arguments, lists the files in the current working directory. A directory that is not the current working directory can be specified and ls will list the files there. The user also may specify any list of files and directories. In this case, all files and all contents of specified directories will be listed.
Files whose names start with "." are not listed, unless the -a flag is specified or the files are specified explicitly.
Without options, ls displays files in a bare format. This bare format however makes it difficult to establish the type, permissions, and size of the files. The most common options to reveal this information or change the list of files are:
-l long format, displaying Unix file type, permissions, number of hard links, owner, group, size, date, and filename
-F appends a character revealing the nature of a file, for example, * for an executable, or / for a directory. Regular files have no suffix.
-a lists all files in the given directory, including those whose names start with "." By default, these files are excluded from the list.
-R recursively lists subdirectories. The command ls -R / would therefore list all files.
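For example, a minimal illustration of the long format (the file name, owner, group, and size shown are hypothetical):
$ ls -l notes.txt
-rw-r--r-- 1 murali staff 1024 Jan 05 09:30 notes.txt
The long listing shows the file type and permissions, the number of hard links, the owner, the group, the size in bytes, the modification date and time, and the file name.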

cd
is a command-line command used to change the current working directory in the Unix and DOS operating systems. It is also available for use in Unix shell scripts and DOS batch files. cd is frequently built into shells such as the Bourne shell, tcsh, and bash (where it calls the chdir() POSIX C function), and into DOS's COMMAND.COM.
A directory is a logical section of a filesystem used to hold files. Directories may also contain other directories. The cd command can be used to change into a subdirectory, move back into the parent directory, move all the way back to the root (/ in UNIX, \ in DOS) or move to any given directory.
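For example (the path shown is hypothetical):
$ cd /home/murali/projects   # change to an absolute path
$ cd ..                      # move up to the parent directory
$ cd /                       # move to the root directory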

pwd
command (print working directory) is used to print the name of current working directory from a computer's command-line interface. If the shell prompt does not already show this, the user can use this command to find their place in the directory tree. This command is found in the Unix family of operating systems and other flavors as well. The DOS equivalent is "CD" with no arguments.
It is a command which is sometimes built into certain shells such as sh and bash. It can be implemented easily with the POSIX C functions getcwd() and/or getwd().
Example:
$ pwd
/home/foobar

mkdir
command in the Unix operating system is used to make a new Directory. Normal usage is as straightforward as follows:
mkdir name_of_directory
Where name_of_directory is the name of the directory one wants to create. When typed as above (i.e., normal usage), the new directory is created within the current directory.
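For example, to create a directory named reports (a hypothetical name) inside the current directory:
$ mkdir reports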
rm (short for remove)
is a Unix command used to delete files from a filesystem. Common options that rm accepts include:
-r, which processes subdirectories recursively
-i, which asks for every deletion to be confirmed
-f, which ignores non-existent files and overrides any confirmation prompts ("force")
rm is often aliased to "rm -i" to help avoid accidental deletion of files. A user who still wishes to delete a large number of files without confirmation can override the -i option by adding the -f option.
"rm -rf" (variously, "rm -rf /", "rm -rf *", and others) is frequently used in jokes and anecdotes about Unix disasters. The "rm -rf /" variant of the command, if run by an administrator, would cause the contents of every mounted disk on the computer to be deleted.

rmdir
is a command which will remove an empty directory on a Unix system. The command name must be typed in lowercase. Normal usage is straightforward; one types:
rmdir name_of_directory
Where name_of_directory corresponds with the name of the directory one wishes to delete. There are options to this command such as -p which removes parent directories if they are also empty.
For example:
rmdir -p foo/bar/baz
This will first remove baz/, then bar/, and finally foo/, thus removing the entire directory tree specified in the command argument.
Often rmdir will not remove a directory if there are still files present in it. To remove a directory along with the files it contains, use the rm command with the -r flag (usually combined with -f) instead. For example:
rm -rf foo/bar/baz


cp
is the command entered in a Unix shell to copy a file from one place to another, possibly on a different filesystem. The original file remains unchanged, and the new file may have the same or a different name.
To Copy a File to another File
cp [ -f ] [ -h ] [ -i ] [ -p ][ -- ] SourceFile TargetFile
To Copy a File to a Directory
cp [ -f ] [ -h ] [ -i ] [ -p ] [ -r | -R ] [ -- ] SourceFile ... TargetDirectory
To Copy a Directory to a Directory
cp [ -f ] [ -h ] [ -i ] [ -p ] [ -- ] { -r | -R } SourceDirectory ... TargetDirectory
-f (force) – specifies removal of the target file if it cannot be opened for write operations. The removal precedes any copying performed by the cp command.
-h – makes the cp command copy symbolic links. The default is to follow symbolic links, that is, to copy files to which symbolic links point.
-i (interactive) – prompts you with the name of a file to be overwritten. This occurs if the TargetDirectory or TargetFile parameter contains a file with the same name as a file specified in the SourceFile or SourceDirectory parameter. If you enter y or the locale's equivalent of y, the cp command continues. Any other answer prevents the cp command from overwriting the file.
-p (preserve) – preserves characteristics of each SourceFile/SourceDirectory, such as the modification and access times and the access permissions, in the corresponding TargetFile and/or TargetDirectory.
Examples
To make a copy of a file in the current directory, enter:
cp prog.c prog.bak
This copies prog.c to prog.bak. If the prog.bak file does not already exist, the cp command creates it. If it does exist, the cp command replaces it with a copy of the prog.c file.
To copy a file in your current directory into another directory, enter:
cp jones /home/nick/clients
This copies the jones file to /home/nick/clients/jones.
To copy a file to a new file and preserve the modification date, time, and access control list associated with the source file, enter:
cp -p smith smith.jr
This copies the smith file to the smith.jr file. Instead of creating the file with the current date and time stamp, the system gives the smith.jr file the same date and time as the smith file. The smith.jr file also inherits the smith file's access control protection.
To copy all the files in a directory to a new directory, enter:
cp /home/janet/clients/* /home/nick/customers
This copies only the files in the clients directory to the customers directory.
To copy a directory, including all its files and subdirectories, to another directory, enter:
cp -R /home/nick/clients /home/nick/customers
This copies the clients directory, including all its files, subdirectories, and the files in those subdirectories, to the customers/clients directory.
To copy a specific set of files to another directory, enter:
cp jones lewis smith /home/nick/clients
This copies the jones, lewis, and smith files in your current working directory to the /home/nick/clients directory.
To use pattern-matching characters to copy files, enter:
cp programs/*.c .
This copies the files in the programs directory that end with .c to the current directory, signified by the single . (dot). You must type a space between the c and the final dot.



find


program is a search utility, mostly found on Unix-like platforms. It searches through a directory tree of a filesystem, locating files based on some user-specified criteria. By default, find returns all files below the current working directory. Further, find allows the user to specify an action to be taken on each matched file. Thus, it is an extremely powerful program for applying actions to many files. It also supports regexp matching.

Examples
From current directory
find . -name my\*
This searches in the current directory (represented by a period) and below it, for files and directories with names starting with my. The backslash before the star is needed to avoid shell expansion. Without the backslash, the shell would replace my* with the list of files whose names begin with my in the current directory. An alternative is to enclose the arguments in quotes: find . -name "my*"
Files only
find . -name "my*" -type f
This limits the results of the above search to only regular files, therefore excluding directories, special files, pipes, symbolic links, etc. my* is enclosed in quotes, as otherwise the shell would replace it with the list of files in the current directory starting with my.
Commands
The previous examples created listings of results because, by default, find executes the '-print' action. (Note that early versions of the find command had no default action at all; therefore the resulting list of files would be discarded, to the bewilderment of naïve users.)
find . -name "my*" -type f -ls
This prints an extended file information.
Search all directories
find / -name "myfile" -type f -print
This searches every file on the computer for a file with the name myfile. It is generally not a good idea to look for data files this way. This can take a considerable amount of time, so it is best to specify the directory more precisely.
Specify a directory
find /home/brian -name "myfile" -type f -print
This searches for files named myfile in the /home/brian directory, which is the home directory for the user brian. You should always specify the directory to the deepest level you can remember.
Find any one of differently named files
find . ( -name "*jsp" -or -name "*java" ) -type f -ls
This prints extended information on any file whose name ends with either 'jsp' or 'java'. Note that the parentheses are required. Also note that the "-or" operator can be abbreviated as "-o". The "and" operator is assumed where no operator is given. In many shells the parentheses must be escaped with a backslash, "\(" and "\)", to prevent them from being interpreted as special shell characters.
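The action taken on each matched file need not be a listing; the -exec option runs a command on every match. A sketch with a hypothetical file pattern (use with care, since it deletes the matched files):
find . -name "*.tmp" -type f -exec rm {} \;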

touch

is a program on Unix and Unix-like systems used to change a file's date- and time-stamp. It can also be used to create an empty file. The command-syntax is:
touch [options] filename
If the file exists, its access and modification time-stamps are set to the system's current date and time, as if the file had been changed; touching a file thus simulates a change to it. If the file does not exist, an empty file of that name is created with its access and modification time-stamps set to the system's current date and time. At least one file name must be specified.
touch can be invoked with options to change its behaviour, which may vary from one Unix to another. One option makes it possible to set the file's time-stamp to something other than the current system date and time, but this action is normally restricted to the owner of the file or the system's superuser.
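For example (the file name and time-stamp are hypothetical; option support may vary between systems):
$ touch notes.txt                  # create notes.txt if it does not exist, otherwise update its time-stamps
$ touch -t 202001011200 notes.txt  # set the time-stamps to 12:00 on Jan 1, 2020, where -t is supported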

echo
is a command in Unix (and by extension, its descendants, such as Linux) and MS-DOS that places a string on the terminal. It is typically used in shell scripts and batch programs to output status text to the screen or a file.
$ echo This is a test.
This is a test.
$ echo "This is a test." > ./test.txt
$ cat ./test.txt
This is a test.

cat
program concatenates the contents of files, reading from a list of files and/or standard input in sequence and writing their contents in order to standard output. cat takes the list of files as arguments but also interprets the argument "-" as standard input.
Example: cat filename
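For example, to concatenate two files into a third (the file names are hypothetical):
$ cat part1.txt part2.txt > whole.txt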

who
The Unix command who displays a list of users who are currently logged into a computer. The command accepts various options that vary by system to further specify the information that is returned, such as the length of time a particular user has been connected or what pseudo-teletype a user is connected to. The who command is related to the command w, which provides the same information but also displays additional data and statistics.
Example output
user19 pts/35 Apr 18 08:40 (localhost)
user28 pts/27 Apr 18 09:50 (localhost)


du (abbreviated from disk usage)
is a Unix computer program to display the amount of disk space used under a particular directory or files on a file system.
du counts the disk space by walking the directory tree. As such, the amount of space on a file system shown by du may vary from that shown by df if files have been deleted but their blocks not yet freed.
In Linux, it is a part of the GNU Coreutils package.
The du utility first appeared in version 1 of AT&T UNIX.
Example
The -k flag will show the sizes in 1K blocks, rather than the default of 512 byte blocks.
$ du -k /seclog
4 /seclog/lost+found
132 /seclog/backup/aix7
136 /seclog/backup
44044 /seclog/temp
439264 /seclog