SOAP, originally defined as Simple Object Access Protocol, is a protocol specification for exchanging structured information in the implementation of Web Services in computer networks. It relies on Extensible Markup Language (XML) for its message format, and usually relies on other Application Layer protocols, most notably Remote Procedure Call (RPC) and Hypertext Transfer Protocol (HTTP), for message negotiation and transmission. SOAP can form the foundation layer of a web services protocol stack, providing a basic messaging framework upon which web services can be built. This XML-based protocol consists of three parts: an envelope, which defines what is in the message and how to process it; a set of encoding rules for expressing instances of application-defined datatypes; and a convention for representing procedure calls and responses.
As a layman's example of how SOAP procedures can be used, a SOAP message could be sent to a web-service-enabled web site, for example a real-estate price database, with the parameters needed for a search. The site would then return an XML-formatted document with the resulting data, e.g. prices, locations and features. Because the data is returned in a standardized, machine-parseable format, it can then be integrated directly into a third-party web site or application.
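To make that concrete, here is a minimal Python sketch of such an exchange. The endpoint URL, namespace and element names (GetPrice, City, MaxPrice) are invented for illustration and are not part of any real service; the requests library is assumed to be installed.

import requests  # third-party HTTP client, assumed available

# Hypothetical SOAP request for a real-estate price search.
envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetPrice xmlns="http://example.com/realestate">
      <City>Chicago</City>
      <MaxPrice>250000</MaxPrice>
    </GetPrice>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    "http://example.com/realestate-service",  # hypothetical endpoint
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/realestate/GetPrice"},
)
print(response.status_code)
print(response.text)  # XML document carrying prices, locations, features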
The SOAP architecture consists of several layers of specifications: for message format, Message Exchange Patterns (MEP), underlying transport protocol bindings, message processing models, and protocol extensibility. SOAP is the successor of XML-RPC, though it borrows its transport and interaction neutrality and its envelope/header/body structure from elsewhere (probably from WDDX).
SOAP structure
SOAP once stood for 'Simple Object Access Protocol', but this acronym was dropped with Version 1.2 of the standard.[1] Version 1.2 became a W3C recommendation on June 24, 2003. The acronym is sometimes confused with SOA, which stands for service-oriented architecture; however, SOAP is a messaging protocol, not an architectural style, and the two are distinct.
SOAP was originally designed by Dave Winer, Don Box, Bob Atkinson, and Mohsen Al-Ghosein in 1998 in a project for Microsoft (where Atkinson and Al-Ghosein were already working at the time)[2], as an object-access protocol. The SOAP specification is currently maintained by the XML Protocol Working Group of the World Wide Web Consortium.
After SOAP was first introduced, it became the underlying layer of a more complex set of Web Services, based on Web Services Description Language (WSDL) and Universal Description Discovery and Integration (UDDI). These services, especially UDDI, have proved to be of far less interest, but an appreciation of them gives a fuller understanding of the expected role of SOAP compared to how web services have actually developed.
The SOAP specification
The SOAP specification defines the messaging framework, which consists of:
• The SOAP processing model defining the rules for processing a SOAP message
• The SOAP extensibility model defining the concepts of SOAP features and SOAP modules
• The SOAP underlying protocol binding framework describing the rules for defining a binding to an underlying protocol that can be used for exchanging SOAP messages between SOAP nodes
• The SOAP message construct defining the structure of a SOAP message
SOAP processing model
The SOAP processing model describes a distributed processing model: its participants (the SOAP nodes) and how a SOAP receiver processes a SOAP message. The following SOAP nodes are defined:
• SOAP sender
A SOAP node that transmits a SOAP message.
• SOAP receiver
A SOAP node that accepts a SOAP message.
• SOAP message path
The set of SOAP nodes through which a single SOAP message passes.
• Initial SOAP sender (Originator)
The SOAP sender that originates a SOAP message at the starting point of a SOAP message path.
• SOAP intermediary
A SOAP intermediary is both a SOAP receiver and a SOAP sender and is targetable from within a SOAP message. It processes the SOAP header blocks targeted at it and acts to forward a SOAP message towards an ultimate SOAP receiver.
• Ultimate SOAP receiver
The SOAP receiver that is a final destination of a SOAP message. It is responsible for processing the contents of the SOAP body and any SOAP header blocks targeted at it. In some circumstances, a SOAP message might not reach an ultimate SOAP receiver, for example because of a problem at a SOAP intermediary. An ultimate SOAP receiver cannot also be a SOAP intermediary for the same SOAP message.
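As a rough illustration of the header targeting described above (not an excerpt from the specification), the sketch below parses a SOAP 1.2-style envelope in which one header block is addressed to an intermediary via the env:role attribute; the logging header, role URI and body content are invented.

import xml.etree.ElementTree as ET

SOAP_ENV = "http://www.w3.org/2003/05/soap-envelope"

# Hypothetical message: the audit header targets an intermediary role,
# while the body is processed only by the ultimate SOAP receiver.
message = """<?xml version="1.0"?>
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
  <env:Header>
    <log:audit xmlns:log="http://example.com/logging"
               env:role="http://example.com/roles/logger"
               env:mustUnderstand="true">trace-42</log:audit>
  </env:Header>
  <env:Body>
    <m:GetPrice xmlns:m="http://example.com/realestate"><m:City>Chicago</m:City></m:GetPrice>
  </env:Body>
</env:Envelope>"""

root = ET.fromstring(message)
for header_block in root.find(f"{{{SOAP_ENV}}}Header"):
    role = header_block.get(f"{{{SOAP_ENV}}}role")
    must_understand = header_block.get(f"{{{SOAP_ENV}}}mustUnderstand")
    print(header_block.tag, "role:", role, "mustUnderstand:", must_understand)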
Thursday, August 12, 2010
Types of software Testing
Software Testing Types:
Black box testing – Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.
White box testing – Testing based on knowledge of the internal logic of an application's code; also known as glass box testing. The internal workings of the software and code must be known for this type of testing. Tests are based on coverage of code statements, branches, paths and conditions.
Unit testing – Testing of individual software components or modules. Typically done by the programmer rather than by testers, as it requires detailed knowledge of the internal program design and code; it may require developing test driver modules or test harnesses (a minimal unit-test sketch follows this list).
Incremental integration testing – A bottom-up approach to testing, i.e. continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately; done by programmers or by testers.
Integration testing – Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Functional testing – This type of testing ignores the internal parts and focuses on whether the output is as per the requirements. It is black-box testing geared to the functional requirements of an application.
System testing – Entire system is tested as per the requirements. Black-box type testing that is based on overall requirements specifications, covers all combined parts of a system.
End-to-end testing – Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Sanity testing – Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application crashes during initial use, the system is not stable enough for further testing and the build or application is sent back to be fixed.
Regression testing – Testing the application as a whole after a modification to any module or functionality. Because it is difficult to cover the entire system in regression testing, automation tools are typically used for this type of testing.
Acceptance testing – Normally this type of testing is done to verify that the system meets the customer-specified requirements. The user or customer performs this testing to determine whether to accept the application.
Load testing – A performance test that checks system behavior under load, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.
Stress testing – The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, for example input beyond storage capacity, complex database queries, or continuous input to the system or database.
Performance testing – A term often used interchangeably with 'stress' and 'load' testing; checks whether the system meets its performance requirements. Various performance and load tools are used for this.
Usability testing – A user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user gets stuck? Basically, system navigation is checked in this testing.
Install/uninstall testing – Testing of full, partial, or upgrade install/uninstall processes on different operating systems and under different hardware and software environments.
Recovery testing – Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Security testing – Checks whether the system can be penetrated by any hacking technique, how well the system protects against unauthorized internal or external access, and whether the system and database are safe from external attacks.
Compatibility testing – Testing how well software performs in a particular hardware/software/operating system/network environment and in different combinations of the above.
Comparison testing – Comparison of product strengths and weaknesses with previous versions or other similar products.
Alpha testing – An in-house virtual user environment can be created for this type of testing. Testing is done toward the end of development; minor design changes may still be made as a result of such testing.
Beta testing – Testing typically done by end-users or others; the final testing before releasing the application for commercial use.
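As promised above, here is a minimal unit-test sketch using Python's built-in unittest module. The compute_discount function and its rules are hypothetical and exist only so the tests have something to exercise.

import unittest

def compute_discount(order_total):
    """Hypothetical function under test: 10% off orders of 100 or more."""
    if order_total < 0:
        raise ValueError("order_total must be non-negative")
    return order_total * 0.9 if order_total >= 100 else order_total

class ComputeDiscountTest(unittest.TestCase):
    def test_discount_applied_at_threshold(self):
        self.assertAlmostEqual(compute_discount(100), 90.0)

    def test_no_discount_below_threshold(self):
        self.assertEqual(compute_discount(50), 50)

    def test_negative_total_rejected(self):
        with self.assertRaises(ValueError):
            compute_discount(-1)

if __name__ == "__main__":
    unittest.main()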
Happy Testing!!!
Wednesday, August 11, 2010
“ITIL”
1 What is ITIL and what are its origins?
It is hard to believe that the IT Infrastructure Library or ITIL® is 20 years old. On its third version now, ITIL is the most widely adopted framework for IT Service Management in the world. It is a practical, no-nonsense approach to the identification, planning, delivery and support of IT services to the business.
In the early '80s, the evolution of computing technology moved from mainframe-centric infrastructure and centralized IT organizations to distributed computing and geographically dispersed resources. While the ability to distribute technology afforded organizations more flexibility, the side effect was inconsistent application of processes for technology delivery and support. The UK's Office of Government Commerce recognized that utilizing consistent practices for all aspects of a service lifecycle could assist in driving organizational effectiveness and efficiency as well as predictable service levels, and thus ITIL was born. ITIL guidance has since been a successful mechanism to drive consistency, efficiency and excellence into the business of managing IT services.
Since ITIL is an approach to IT “service” management, the concept of a service must be discussed. A service is something that provides value to customers. Services that customers can directly utilize or consume are known as “business” services. An example of a business service that has common applicability across industries would be Payroll. Payroll is an IT service that is used to consolidate information, calculate compensation and generate paychecks on a regular periodic basis. Payroll may rely on other “business” services such as “Time Tracking” or “Benefits Administration” for information necessary to calculate the correct compensation for an employee during a given time period.
In order for Payroll to run, it is supported by a number of technology or “infrastructure” services. An infrastructure service does its work in the background, such that the business does not directly interact with it, but technology services are necessary as part of the overall value chain of the business service. “Server Administration”, “Database Administration”, “Storage Administration” are all examples of technology services required for the successful delivery of the Payroll business service.
See Figure 1.
IT has traditionally been focused on the “infrastructure” services and on managing the technology silos. IT Service Management guidance in ITIL suggests a more holistic approach to managing services from end to end. Managing the entire business service along with its underlying components cohesively assures that we are considering every aspect of a service (and not just the individual technology silos) and that we are delivering the required functionality (or utility – accurate paychecks for all employees) and service levels (or warranty – delivered within a certain timeframe, properly secured, available when necessary) to the business customer.
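Purely as an illustration of the end-to-end service idea (the structure below is not part of ITIL, and the service names simply reuse the Payroll example), such a service map could be sketched in Python as:

# Hypothetical end-to-end service map for the Payroll example above.
service_map = {
    "Payroll": {  # business service consumed directly by customers
        "depends_on_business": ["Time Tracking", "Benefits Administration"],
        "depends_on_infrastructure": ["Server Administration",
                                      "Database Administration",
                                      "Storage Administration"],
        "utility": "accurate paychecks for all employees",
        "warranty": "delivered on schedule, secured, available when needed",
    }
}

def supporting_services(business_service):
    """Return every service that must be healthy for the business service to deliver value."""
    entry = service_map[business_service]
    return entry["depends_on_business"] + entry["depends_on_infrastructure"]

print(supporting_services("Payroll"))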
ITIL is typically used in conjunction with one or more other good practices to manage information technology such as:
• COBIT (a framework for IT governance and controls)
• Six Sigma (a quality methodology)
• TOGAF (a framework for IT architecture)
• ISO 27000 (a standard for IT security)
The Service Lifecycle
ITIL is organized around a Service Lifecycle, which includes Service Strategy, Service Design, Service Transition, Service Operation and Continual Service Improvement.
The lifecycle starts with Service Strategy – understanding who the IT customers are, the service offerings that are required to meet the customers' needs, the IT capabilities and resources that are required to develop these offerings, and the requirements for executing successfully. Driven through strategy and throughout the course of delivery and support of the service, IT must always try to assure that the cost of delivery is consistent with the value delivered to the customer.
Service Design assures that new and changed services are designed effectively to meet customer expectations. The technology and architecture required to meet customer needs cost-effectively are an integral part of Service Design, as are the processes required to manage the services.
Service management systems and tools that are necessary to adequately monitor and support new or modified services must be considered, as well as mechanisms for measuring service levels and the efficiency and effectiveness of technology and processes. Through the Service Transition phase of the lifecycle, the design is built, tested and moved into production to assure that the business customer can achieve the desired value. This phase addresses managing changes; controlling the assets and configuration items (underlying components – hardware, software, etc.) associated with new and changed systems; service validation and testing; and transition planning, to assure that users, support personnel and the production environment have been prepared for the release to production. Once transitioned, Service Operation delivers the service on an ongoing basis, overseeing the daily overall health of the service. This includes managing disruptions to service through rapid restoration of incidents, determining the root cause of problems, detecting trends associated with recurring issues, handling routine end-user requests and managing service access.
Figure 1 – The End-To-End Service
Enveloping the Service Lifecycle is Continual Service Improvement (CSI). CSI offers a mechanism for IT to measure and improve the service levels, the technology, and the efficiency and effectiveness of the processes used in the overall management of services.
2 Why would an organization be interested in ITIL?
Although today's technologies provide robust capabilities and afford significant flexibility, they are very complex. The global reach available to companies via the internet provides tremendous business opportunity while presenting additional challenges regarding the confidentiality, integrity and availability of our services and our data. Additionally, IT organizations need to continue to meet or exceed service expectations while working as efficiently as possible. Consistent, repeatable processes are the key to efficiency, effectiveness and the ability to improve services, and these consistent, repeatable processes are outlined in the ITIL framework.
“MS-DOS commands”
MS-DOS and command line overview
Below is a listing of MS-DOS commands, as listed on Computer Hope, with a brief explanation of what each command does. Not every command below will work in your version of MS-DOS or the Windows command line.
Command Description Type
ansi.sys
Defines functions that change display graphics, control cursor movement, and reassign keys. File
append
Causes MS-DOS to look in other directories when editing a file or running a command. External
arp
Displays, adds, and removes arp information from network devices. External
assign
Assign a drive letter to an alternate letter. External
assoc
View the file associations. Internal
at
Schedule a time to execute commands or programs. External
atmadm
Lists connections and addresses seen by Windows ATM call manager. Internal
attrib
Display and change file attributes. External
batch
Recovery console command that executes a series of commands in a file. Recovery
bootcfg
Recovery console command that allows a user to view, modify, and rebuild the boot.ini Recovery
break
Enable / disable CTRL + C feature. Internal
cacls
View and modify file ACLs. External
call
Calls a batch file from another batch file. Internal
cd
Changes directories. Internal
chcp
Supplement the International keyboard and character set information. External
chdir
Changes directories. Internal
chkdsk
Check the hard disk drive running FAT for errors. External
chkntfs
Check the hard disk drive running NTFS for errors. External
choice
Specify a listing of multiple options within a batch file. External
cls
Clears the screen. Internal
cmd
Opens the command interpreter.
color
Easily change the foreground and background color of the MS-DOS window. Internal
command
Opens the command interpreter.
comp
Compares files. External
compact
Compresses and uncompress files. External
control
Open Control Panel icons from the MS-DOS prompt. External
convert
Convert FAT to NTFS. External
copy
Copy one or more files to an alternate location. Internal
ctty
Change the computer's input/output devices. Internal
date
View or change the system's date. Internal
debug
Debug utility to create assembly programs to modify hardware settings. External
defrag
Re-arrange the hard disk drive to help with loading programs. External
del
Deletes one or more files. Internal
delete
Recovery console command that deletes a file. Recovery
deltree
Deletes one or more files and/or directories. External
dir
List the contents of one or more directories. Internal
disable
Recovery console command that disables Windows system services or drivers. Recovery
diskcomp
Compare a disk with another disk. External
diskcopy
Copy the contents of one disk and place them on another disk. External
doskey
Command to view and execute commands that have been run in the past. External
dosshell
A GUI to help early MS-DOS users. External
drivparm
Enables overwrite of original device drivers. Internal
echo
Displays messages and enables and disables echo. Internal
edit
View and edit files. External
edlin
View and edit files. External
emm386
Load extended Memory Manager. External
enable
Recovery console command to enable a disabled service or driver. Recovery
endlocal
Stops the localization of the environment changes enabled by the setlocal command. Internal
erase
Erase files from computer. Internal
exit
Exit from the command interpreter. Internal
expand
Expand a Microsoft Windows file back to its original format. External
extract
Extract files from the Microsoft Windows cabinets. External
fasthelp
Displays a listing of MS-DOS commands and information about them. External
fc
Compare files. External
fdisk
Utility used to create partitions on the hard disk drive. External
find
Search for text within a file. External
findstr
Searches for a string of text within a file. External
fixboot
Writes a new boot sector. Recovery
fixmbr
Writes a new boot record to a disk drive. Recovery
for
Loop construct used in batch files to repeat a command for each item in a set. Internal
format
Command to erase and prepare a disk drive. External
ftp
Command to connect to and operate on an FTP server. External
ftype
Displays or modifies file types used in file extension associations. Recovery
goto
Moves a batch file to a specific label or location. Internal
graftabl
Show extended characters in graphics mode. External
help
Display a listing of commands and brief explanation. External
if
Allows batch files to perform conditional processing. Internal
ifshlp.sys
32-bit file manager. External
ipconfig
Network command to view network adapter settings and assigned values. External
keyb
Change layout of keyboard. External
label
Change the label of a disk drive. External
lh
Load a device driver into high memory. Internal
listsvc
Recovery console command that displays the services and drivers. Recovery
loadfix
Load a program above the first 64k. External
loadhigh
Load a device driver into high memory. Internal
lock
Lock the hard disk drive. Internal
logoff
Log off the current profile using the computer. External
logon
Recovery console command to list installations and enable administrator login. Recovery
map
Displays the device name of a drive. Recovery
md
Command to create a new directory. Internal
mem
Display memory on system. External
mkdir
Command to create a new directory. Internal
mode
Modify the port or display settings. External
more
Display one page at a time. External
move
Move one or more files from one directory to another directory. Internal
msav
Early Microsoft Virus scanner. External
msd
Diagnostics utility. External
mscdex
Utility used to load and provide access to the CD-ROM. External
nbtstat
Displays protocol statistics and current TCP/IP connections using NBT. External
net
Update, fix, or view the network or network settings. External
netsh
Configure dynamic and static network information from MS-DOS. External
netstat
Display the TCP/IP network protocol statistics and information. External
nlsfunc
Load country specific information. External
nslookup
Look up an IP address of a domain or host on a network. External
path
View and modify the computer's path location. Internal
pathping
View and locate locations of network latency. External
pause
Command used in batch files to stop the processing of a command. Internal
ping
Test / send information to another network computer or network device. External
popd
Changes to the directory or network path stored by the pushd command. Internal
power
Conserve power with portable computers. External
print
Prints data to a printer port. External
prompt
View and change the MS-DOS prompt. Internal
pushd
Stores a directory or network path in memory so it can be returned to at any time. Internal
qbasic
Open the QBasic editor. External
rd
Removes an empty directory. Internal
ren
Renames a file or directory. Internal
rename
Renames a file or directory. Internal
rmdir
Removes an empty directory. Internal
route
View and configure windows network route tables. External
runas
Enables a user to run a program as a different user. External
scandisk
Run the scandisk utility. External
scanreg
Scan registry and recover registry from errors. External
set
Change one variable or string to another. Internal
setlocal
Enables local environments to be changed without affecting anything else. Internal
setver
Change MS-DOS version to trick older MS-DOS programs. External
share
Installs support for file sharing and locking capabilities. External
shift
Changes the position of replaceable parameters in a batch program. Internal
shutdown
Shutdown the computer from the MS-DOS prompt. External
smartdrv
Create a disk cache in conventional memory or extended memory. External
sort
Sorts the input and displays the output to the screen. External
start
Start a separate window in Windows from the MS-DOS prompt. Internal
subst
Substitute a folder on your computer for another drive letter. External
switches
Remove or add functions from MS-DOS. Internal
sys
Transfer system files to disk drive. External
telnet
Telnet to another computer / device from the prompt. External
time
View or modify the system time. Internal
title
Change the title of the MS-DOS window. Internal
tracert
Visually view a network packet's route across a network. External
tree
View a visual tree of the hard disk drive. External
type
Display the contents of a file. Internal
undelete
Undelete a file that has been deleted. External
unformat
Unformat a hard disk drive. External
unlock
Unlock a disk drive. Internal
ver
Display the version information. Internal
verify
Enables or disables the feature to determine if files have been written properly. Internal
vol
Displays the volume information about the designated drive. Internal
xcopy
Copy multiple files, directories, and/or drives from one location to another. External
“LDAP (Lightweight Directory Access Protocol)”
What is LDAP?
LDAP, Lightweight Directory Access Protocol, is an Internet protocol that email and other programs use to look up information from a server.
Every email program has a personal address book, but how do you look up an address for someone who's never sent you email? How can an organization keep one centralized up-to-date phone book that everybody has access to?
That question led software companies such as Microsoft, IBM, Lotus, and Netscape to support a standard called LDAP. "LDAP-aware" client programs can ask LDAP servers to look up entries in a wide variety of ways. LDAP servers index all the data in their entries, and "filters" may be used to select just the person or group you want, and return just the information you want. For example, here's an LDAP search translated into plain English: "Search for all people located in Chicago whose name contains "Fred" that have an email address. Please return their full name, email, title, and description."
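As a hedged sketch of what that plain-English query might look like through an LDAP client library (here the third-party ldap3 package; the host, credentials and base DN are hypothetical):

from ldap3 import Server, Connection, ALL  # third-party LDAP client, assumed installed

server = Server("ldap.example.com", get_info=ALL)  # hypothetical directory host
conn = Connection(server, user="cn=reader,dc=example,dc=com",
                  password="secret", auto_bind=True)

# "All people located in Chicago whose name contains Fred and who have an email address."
conn.search(
    search_base="dc=example,dc=com",
    search_filter="(&(objectClass=person)(l=Chicago)(cn=*Fred*)(mail=*))",
    attributes=["cn", "mail", "title", "description"],
)

# Return their full name, email, title, and description.
for entry in conn.entries:
    print(entry.cn, entry.mail, entry.title, entry.description)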
LDAP is not limited to contact information, or even information about people. LDAP is used to look up encryption certificates, pointers to printers and other services on a network, and provide "single signon" where one password for a user is shared between many services. LDAP is appropriate for any kind of directory-like information, where fast lookups and less-frequent updates are the norm.
As a protocol, LDAP does not define how programs work on either the client or server side. It defines the "language" used for client programs to talk to servers (and servers to servers, too). On the client side, a client may be an email program, a printer browser, or an address book. The server may speak only LDAP, or have other methods of sending and receiving data—LDAP may just be an add-on method.
If you have an email program (as opposed to web-based email), it probably supports LDAP. Most LDAP clients can only read from a server. Search abilities of clients (as seen in email programs) vary widely. A few can write or update information, but LDAP does not include security or encryption, so updates usually require additional protection such as an encrypted SSL connection to the LDAP server.
LDAP also defines:
• Permissions: set by the administrator to allow only certain people to access the LDAP database, and optionally keep certain data private.
• Schema: a way to describe the format and attributes of data in the server. For example, a schema entered in an LDAP server might define a "groovyPerson" entry type, which has attributes of "instantMessageAddress" and "coffeeRoastPreference". The normal attributes of name, email address, etc., would be inherited from one of the standard schemas, which are rooted in X.500 (see below).
LDAP was designed at the University of Michigan to adapt a complex enterprise directory system (called X.500) to the modern Internet. X.500 is too complex to support on desktops and over the Internet, so LDAP was created to provide this service "for the rest of us."
LDAP servers exist at three levels: There are big public servers, large organizational servers at universities and corporations, and smaller LDAP servers for workgroups. Most public servers from around year 2000 have disappeared, although directory.verisign.com exists for looking up X.509 certificates. The idea of publicly listing your email address for the world to see, of course, has been crushed by spam.
While LDAP didn't bring us the worldwide email address book, it continues to be a popular standard for communicating record-based, directory-like data between programs.
“OLAP (Online Analytical Processing)”
Introduction to OLAP
OLAP (or Online Analytical Processing) has been growing in popularity due to the increase in data volumes and the recognition of the business value of analytics. Until the mid-nineties, performing OLAP analysis was an extremely costly process mainly restricted to larger organizations.
The major OLAP vendors are Hyperion, Cognos, Business Objects and MicroStrategy. The cost per seat was in the range of $1,500 to $5,000 per annum, and setting up the environment to perform OLAP analysis also required substantial investments in time and money.
This has changed as the major database vendors have started to incorporate OLAP modules within their database offerings: Microsoft SQL Server 2000 with Analysis Services, Oracle with Express and Darwin, and IBM with DB2.
What is OLAP?
OLAP allows business users to slice and dice data at will. Normally, data in an organization is distributed across multiple data sources that are incompatible with each other. A retail example: point-of-sale data and sales made via the call center or the Web are stored in different locations and formats. It would be a time-consuming process for an executive to obtain OLAP reports such as: What are the most popular products purchased by customers between the ages of 15 and 30?
Part of the OLAP implementation process involves extracting data from the various data repositories and making them compatible. Making data compatible involves ensuring that the meaning of the data in one repository matches all other repositories. An example of incompatible data: customer ages can be stored as a birth date for purchases made over the web but as age categories (i.e. between 15 and 30) for in-store sales.
It is not always necessary to create a data warehouse for OLAP analysis. Data stored by operational systems, such as point-of-sale, are kept in databases called OLTPs. OLTP (Online Transaction Processing) databases are not structurally different from other databases; the main, and only, difference is the way in which data is stored.
Examples of OLTPs include ERP, CRM, SCM, point-of-sale and call-center applications.
OLTPs are designed for optimal transaction speed. When a consumer makes a purchase online, they expect the transaction to occur instantaneously. With a database design (called data modeling) optimized for transactions, the record 'Consumer name, Address, Telephone, Order Number, Order Name, Price, Payment Method' is created quickly in the database, and the results can be recalled by managers equally quickly if needed.
Figure 1. Data Model for OLTP
Data are not typically stored for an extended period on OLTPs for storage cost and transaction speed reasons.
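As a minimal sketch of the flat, transaction-optimized record described above (the table and column names are invented, and an in-memory SQLite database stands in for the OLTP store):

import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory database standing in for the OLTP store
conn.execute("""
    CREATE TABLE orders (
        consumer_name TEXT, address TEXT, telephone TEXT,
        order_number TEXT, order_name TEXT, price REAL, payment_method TEXT
    )
""")

# A purchase is written as one quick, self-contained transaction.
conn.execute(
    "INSERT INTO orders VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("Jane Doe", "12 Main St", "555-0100", "A-1001", "Desk Lamp", 39.99, "credit card"),
)
conn.commit()

# Managers can recall the record just as quickly if needed.
print(conn.execute("SELECT * FROM orders WHERE order_number = 'A-1001'").fetchone())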
OLAPs have a different mandate from OLTPs. OLAPs are designed to give an overview analysis of what happened. Hence the data storage (i.e. data modeling) has to be set up differently. The most common method is called the star design.
Figure 2. Star Data Model for OLAP
The central table in an OLAP star data model is called the fact table. The surrounding tables are called the dimensions. Using the above data model, it is possible to build reports that answer questions such as:
• Which supervisor gave the most discounts?
• What quantity was shipped on a particular date, month, quarter or year?
• In which zip code did product A sell the most?
To obtain answers such as the ones above from a data model, OLAP cubes are created. OLAP cubes are not strictly cuboids; it is the name given to the process of linking data from the different dimensions. The cubes can be developed along business units such as sales or marketing, or a giant cube can be formed with all the dimensions.
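To make the star-model reporting idea concrete, here is a small, hypothetical pandas sketch: a fact table joined to a supervisor dimension, answering the 'most discounts' question above. All data and column names are invented, and pandas is assumed to be installed.

import pandas as pd

# Hypothetical fact table and dimension table from a star model.
fact_sales = pd.DataFrame({
    "supervisor_id": [1, 1, 2, 2, 2],
    "zip_code": ["60601", "60601", "60614", "60614", "60622"],
    "quantity": [3, 1, 5, 2, 4],
    "discount": [5.00, 0.00, 2.50, 7.50, 1.00],
})
dim_supervisor = pd.DataFrame({
    "supervisor_id": [1, 2],
    "supervisor_name": ["Alice", "Bob"],
})

# Link the fact table to the dimension, then aggregate.
cube = fact_sales.merge(dim_supervisor, on="supervisor_id")

# Which supervisor gave the most discounts?
print(cube.groupby("supervisor_name")["discount"].sum().idxmax())

# Quantity shipped per zip code (a slice along another dimension).
print(cube.groupby("zip_code")["quantity"].sum())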
Figure 3. OLAP Cube with Time, Customer and Product Dimensions
OLAP can be a valuable and rewarding business tool. Aside from producing reports, OLAP analysis can help an organization evaluate balanced scorecard targets.
Figure 4. Steps in the OLAP Creation Process
OLAP (or Online Analytical Processing) has been growing in popularity due to the increase in data volumes and the recognition of the business value of analytics. Until the mid-nineties, performing OLAP analysis was an extremely costly process mainly restricted to larger organizations.
The major OLAP vendor are Hyperion, Cognos, Business Objects, MicroStrategy. The cost per seat were in the range of $1500 to $5000 per annum. The setting up of the environment to perform OLAP analysis would also require substantial investments in time and monetary resources.
This has changed as the major database vendor have started to incorporate OLAP modules within their database offering - Microsoft SQL Server 2000 with Analysis Services, Oracle with Express and Darwin, and IBM with DB2.
What is OLAP?
OLAP allows business users to slice and dice data at will. Normally, data in an organization is distributed across multiple data sources that are incompatible with each other. A retail example: point-of-sale data and sales made via the call center or the Web are stored in different locations and formats. It would be a time-consuming process for an executive to obtain OLAP reports such as: what are the most popular products purchased by customers between the ages of 15 and 30?
Part of the OLAP implementation process involves extracting data from the various data repositories and making them compatible. Making data compatible involves ensuring that the meaning of the data in one repository matches that in all other repositories. An example of incompatible data: customer ages can be stored as a birth date for purchases made over the Web and as age categories (e.g., between 15 and 30) for in-store sales.
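As a minimal sketch of what making data compatible can look like, the web data could be mapped into the same age categories used by the in-store system. The query below assumes a hypothetical WebSales table with OrderNumber and BirthDate columns and uses SQL Server's DATEDIFF and GETDATE functions; the names and category boundaries are illustrative only.
-- Approximate each web customer's age and bucket it into the in-store categories.
SELECT OrderNumber,
       CASE
           WHEN DATEDIFF(YEAR, BirthDate, GETDATE()) BETWEEN 15 AND 30 THEN '15-30'
           WHEN DATEDIFF(YEAR, BirthDate, GETDATE()) BETWEEN 31 AND 45 THEN '31-45'
           ELSE 'Other'
       END AS AgeCategory
FROM WebSales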
It is not always necessary to create a data warehouse for OLAP analysis. Data stored by operational systems, such as point-of-sale systems, is held in a type of database called OLTP. OLTP (Online Transaction Processing) databases are not structurally different from other databases; the main, and only, difference is the way in which the data is modeled and stored.
Examples of OLTP systems include ERP, CRM, SCM, point-of-sale, and call-center applications.
OLTPs are designed for optimal transaction speed. When consumers make a purchase online, they expect the transaction to occur instantaneously. With a database design (called a data model) optimized for transactions, the record 'Consumer name, Address, Telephone, Order Number, Order Name, Price, Payment Method' is created quickly in the database, and the results can be recalled by managers equally quickly if needed.
Figure 1. Data Model for OLTP
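To make this concrete, here is a minimal, hypothetical sketch of such a transaction-optimized table; the table and column names are illustrative and are not taken from the figure. A purchase is a single fast INSERT, and the record can be recalled just as quickly by its key.
CREATE TABLE Orders (
    OrderNumber   INT PRIMARY KEY,
    ConsumerName  VARCHAR(100),
    Address       VARCHAR(200),
    Telephone     VARCHAR(20),
    OrderName     VARCHAR(100),
    Price         DECIMAL(10, 2),
    PaymentMethod VARCHAR(20)
)
-- Recording a purchase is one quick write.
INSERT INTO Orders (OrderNumber, ConsumerName, Address, Telephone, OrderName, Price, PaymentMethod)
VALUES (1001, 'Jane Smith', '1 Main St', '555-0100', 'Blue Widget', 19.99, 'Credit Card')
-- Recalling it by key is equally quick.
SELECT * FROM Orders WHERE OrderNumber = 1001
A real OLTP model would normally normalize the consumer and the order into separate tables; the single table above simply mirrors the record described in the text.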
Data are not typically stored for an extended period on OLTPs for storage cost and transaction speed reasons.
OLAPs have a different mandate from OLTPs. OLAPs are designed to give an overview analysis of what happened. Hence the data storage (i.e., the data model) has to be set up differently. The most common method is called the star design (star schema).
Figure 2. Star Data Model for OLAP
The central table in an OLAP star data model is called the fact table. The surrounding tables are called the dimensions. Using the above data model, it is possible to build reports that answer questions such as:
• Which supervisor gave the most discounts?
• What quantity was shipped on a particular date, month, quarter, or year?
• In which zip code did product A sell the most?
To obtain answers such as these from the data model, OLAP cubes are created. OLAP cubes are not strictly cuboids; the term refers to the process of linking data from the different dimensions. Cubes can be developed for individual business units, such as sales or marketing, or a single large cube can be formed with all the dimensions.
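As an illustration of how such questions are answered, the sketch below queries a hypothetical star schema in which a Sales fact table carries DateKey and ProductKey foreign keys into DimDate and DimProduct dimension tables; all table and column names are assumed, not taken from the figures.
-- Quantity shipped per month, rolled up from the fact table through the date dimension.
SELECT d.Year, d.Month, SUM(s.QuantityShipped) AS TotalShipped
FROM Sales s
JOIN DimDate d ON s.DateKey = d.DateKey
GROUP BY d.Year, d.Month
A dedicated OLAP engine effectively precomputes aggregations like this across every combination of the chosen dimensions, which is what the "cube" provides.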
Figure 3. OLAP Cube with Time, Customer and Product Dimensions
OLAP can be a valuable and rewarding business tool. Aside from producing reports, OLAP analysis can help an organization evaluate balanced scorecard targets.
Figure 4. Steps in the OLAP Creation Process
“Thick & Thin Client”
Thick Client
Thick clients, also called heavy clients, are full-featured computers that are connected to a network. Unlike thin clients, which lack hard drives and other features, thick clients are functional whether they are connected to a network or not.
While a thick client is fully functional without a network connection, it is only a "client" when it is connected to a server. The server may provide the thick client with programs and files that are not stored on the local machine's hard drive. It is not uncommon for workplaces to provide thick clients to their employees. This enables them to access files on a local server or use the computers offline. When a thick client is disconnected from the network, it is often referred to as a workstation.
Thin Clients
Typically, thin clients are low-powered computers that (strictly speaking) do not have a hard disk drive. Since there is no hard disk drive, there is also no locally installed operating system, and since these are low-powered systems, all processing is done on the server instead of on the thin client itself. Certain types of thin clients (running embedded XP or embedded Linux) may have full-fledged OS capabilities, complete with installed applications such as Microsoft Office or OpenOffice and browsers such as Internet Explorer and/or Mozilla Firefox.
Comparison between Thin Client and Thick Client
Thick vs. Thin - A Quick Comparison
Thin Clients
- Easy to deploy as they require no extra or specialized software installation
- Need to validate with the server after data capture
- If the server goes down, data collection is halted, as the client needs constant communication with the server
- Cannot be interfaced with other equipment (in plants or factory settings, for example)
- Clients run only and exactly as specified by the server
- More downtime
- Portability, in that all applications are on the server, so any workstation can access them
- Opportunity to use older, outdated PCs as clients
- Reduced security threat
Thick Clients
- More expensive to deploy and more work for IT to deploy
- Data verified by the client, not the server (immediate validation)
- Robust technology provides better uptime
- Only needs intermittent communication with the server
- Require more resources but fewer servers
- Can store local files and applications
- Reduced server demands
- Increased security issues
Domain knowledge
How Domain knowledge is Important for testers?
“Looking at the current scenario in the industry, testers are expected to have technical testing skills and either come from a domain background or have acquired domain knowledge, most commonly in BFSI.
I would like to know why and when this domain knowledge is imparted to the tester during the testing cycle.”
First of all, let us look at the three-dimensional testing career mentioned here. There are three categories of skill that need to be judged before hiring any software tester.
What are those three skill categories?
1) Testing skill
2) Domain knowledge
3) Technical expertise.
No doubt any tester should have basic testing skills like manual testing and automation testing. A tester with common sense can find most of the obvious bugs in the software. But would you say that this much testing is sufficient? Would you release the product on the basis of this much testing? Certainly not. You will certainly have the product reviewed by a domain expert before it goes into the market.
While testing any application you should think like an end user. But every human being has limitations, and one cannot be an expert in all three of the dimensions mentioned above, so you cannot assume that you will think 100% like the end user of your application. The user of your application may have a good understanding of the domain he or she works in. You need to balance all these skills so that all product aspects get addressed.
Nowadays, the professionals being hired by many companies are domain experts more than technical experts. The software industry is also seeing a healthy trend of professional developers and domain experts moving into software testing.
There is one more reason why domain experts are in such demand. When you hire fresh engineers who are just out of college, you cannot expect them to compete with experienced professionals. Why? Because experienced professionals have the advantage of domain and testing experience: they have a better understanding of different issues and can deliver the application better and faster.
Here are some of the examples where you can see the distinct edge of domain knowledge:
1) Mobile application testing.
2) Wireless application testing
3) VoIP applications
4) Protocol testing
5) Banking applications
6) Network testing
How will you test such applications without knowledge of the specific domain? Are you going to test BFSI applications (Banking, Financial Services and Insurance) just for UI, functionality, security, load, or stress? You should know the user requirements in banking, the working procedures, the commerce background, the exposure to brokerage, etc., and should test the application accordingly; only then can you say that your testing is enough. This is where subject-matter experts come in.
Let's take the example of a project: a search engine application, where we need to know the basics of search engine terminology and concepts. Many times, testers from other teams ask questions like: what are 'publishers' and 'advertisers', what is the difference, and what do they do? Do you think they can test the application against current online advertising and SEO practice? Certainly not, unless and until they become familiar with these terms and functionalities.
When we know the functional domain better, we can write and execute more test cases and effectively simulate end-user actions, which is distinctly a big advantage.
Here is the big list of the required testing knowledge:
• Testing skill
• Bug hunting skill
• Technical skill
• Domain knowledge
• Communication skill
• Automation skill
• Some programming skill
• Quick grasping
• Ability to Work under pressure …
That is going to be a huge list. So you will certainly ask: do I need to have all of these skills? It depends on you. You can stick to one skill, or be an expert in one skill with a good understanding of the others, or take a balanced approach to all of them. This is a competitive market and you should definitely take advantage of it. Make sure you are an expert in at least one domain before making any move.
What if you don’t have enough domain knowledge?
You may be posted on any project, and the company can assign any work to you. So what if you don't have enough domain knowledge of that project? You need to grasp as many concepts as quickly as you can. Try to understand the product as if you were the customer, and think about what the customer will do with the application. If possible, visit the customer site to see how they work with the product, read online resources about the domain of the application you want to test, participate in events on that domain, and meet the domain experts. Alternatively, the company may provide all of this as in-house training before assigning any domain-specific task to testers.
There is no specific stage at which you need this domain knowledge; you need to apply it throughout the software testing life cycle.
“Pair Programming”
Pair programming is an agile software development technique in which two programmers work together at one work station.
• One types in code while the other reviews each line of code as it is typed. The person typing is called the driver.
• The person reviewing the code is called the observer (or navigator). The two programmers switch roles frequently.
• While reviewing, the observer also considers the strategic direction of the work, coming up with ideas for improvements and likely future problems to address. This frees the driver to focus all of his or her attention on the "tactical" aspects of completing the current task, using the observer as a safety net and guide.
Key Points:
• Programmers working in pairs produce shorter programs, with better designs and fewer bugs, than programmers working alone.
• Studies have found reductions in defect rates of 15% to 50%, varying depending on programmer experience and task complexity.
• Pairs typically consider more design alternatives than programmers working solo, and arrive at simpler, more-maintainable designs, as well as catch design defects early.
• Pairs usually complete work faster than one programmer assigned to the same task. Pairs often find that seemingly "impossible" problems become easy, or even quick, or at least possible, to solve when they work together.
Types of Pair Programming
Remote pair programming, also known as virtual pair programming or distributed pair programming, is pair programming where the two programmers are in different locations[19], working via a collaborative real-time editor, shared desktop, or a remote pair programming IDE plugin. Remote pairing introduces difficulties not present in face-to-face pairing, such as extra delays for coordination, depending more on "heavyweight" task-tracking tools instead of "lightweight" ones like index cards, and loss of non-verbal communication, resulting in confusion and conflicts over such things as who "has the keyboard".[20]
Numerous tools, such as Eclipse plug-ins, are available to support remote pairing. Some teams have tried VNC and RealVNC, with each programmer using their own computer.[21][22][23] Others use the multi-display mode (-x) of the text-based GNU Screen. Apple's Mac OS X has a built-in Screen Sharing application.
Ping pong pair programming
In ping pong pair programming, the observer writes a failing unit test, the driver modifies the code to pass the test, the observer writes a new unit test, and so on. This loop continues as long as the observer is able to write failing unit tests.
Why pair
• Higher quality code
• Faster cycle time
• Enhanced trust/teamwork
• Knowledge transfer
• Enhanced learning
Disadvantages
• Unavailability of partners
• Scheduling
• Experts/Skill Imbalances
• Concentration
• Disagreements
• Overconfidence
• Rushing
• Not for everyone
SQL Replication
SQL replication is a process for sharing/distributing data between different databases and synchronizing between those databases. You can use SQL replication to distribute data to a variety of network points like other database servers, mobile users, etc. You can perform the replication over many different kinds of networks and this won’t affect the end result.
In every SQL replication there are two main players, called the Publisher and the Subscriber. The Publisher is the replication endpoint that supplies the data, and the Subscriber is the replication endpoint that uses the data from the Publisher. Depending on the replication architecture, a replication can have one or more Publishers, and of course any replication will have one or more Subscribers.
MS SQL Server offers several main replication types. Transactional replication is usually used when there is a need to integrate data from several different locations, when offloading batch processing, and in data warehousing scenarios.
Another replication type is Snapshot replication, which is commonly performed when a full database refresh is appropriate or as a starting point for transactional or merge replication.
The third important SQL replication type is Merge replication, which is used whenever there is a possibility of data conflicts across distributed server applications.
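For orientation only, here is a rough sketch of the kind of T-SQL used to set up a transactional publication in MS SQL Server. The system stored procedures named below (sp_replicationdboption, sp_addpublication, sp_addarticle, sp_addsubscription) exist, but the parameters shown are simplified and the database, publication, and server names are made up; consult the SQL Server replication documentation for the full signatures.
-- Enable the database for publishing (simplified parameter list).
EXEC sp_replicationdboption @dbname = N'SalesDB', @optname = N'publish', @value = N'true'
-- Create a publication and add a table (an "article") to it.
EXEC sp_addpublication @publication = N'SalesPublication'
EXEC sp_addarticle @publication = N'SalesPublication', @article = N'Orders', @source_object = N'Orders'
-- Subscribe another server's database to the publication.
EXEC sp_addsubscription @publication = N'SalesPublication', @subscriber = N'ReportServer', @destination_db = N'SalesCopy'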
SQL Aggregate Functions
SQL aggregate functions are used to sum, count, average, and find the minimum and maximum values in a column or in a subset of column values.
To count the rows in the Weather table we can use the SQL COUNT aggregate function:
SELECT COUNT(*)
FROM Weather
To get the average temperature for the Weather table use the AVG SQL aggregate function:
SELECT AVG(AverageTemperature)
FROM Weather
If you want to get the average temperature for a particular city you can do it this way:
SELECT AVG(AverageTemperature)
FROM Weather
WHERE City = 'New York'
To get the minimum value from a numeric table column, use the SQL MIN aggregate function:
SELECT MIN(AverageTemperature)
FROM Weather
To get the maximum value from a numeric table column, use the SQL MAX aggregate function:
SELECT MAX(AverageTemperature)
FROM Weather
Finally to sum up the values in the column use the SQL SUM aggregate function:
SELECT SUM(AverageTemperature)
FROM Weather
You can specify search criteria with the SQL WHERE clause for any of the above SQL aggregate functions.
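Aggregate functions are also commonly combined with the SQL GROUP BY clause to produce one aggregated value per group; for example, assuming the same Weather table with a City column as in the example above, the average temperature for every city can be obtained in a single query:
SELECT City, AVG(AverageTemperature) AS AvgTemperature
FROM Weather
GROUP BY City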
“Decision Tables”
Decision tables are a precise yet compact way to model complicated logic.
Decision tables, like flowcharts and if-then-else and switch-case statements, associate conditions with actions to perform, but in many cases do so in a more elegant way.
The decision table is typically divided into four quadrants, as shown below.
The four quadrants:
Conditions | Condition alternatives
Actions    | Action entries
Each decision corresponds to a variable, relation or predicate whose possible values are listed among the condition alternatives.
A decision table lists causes and effects in a matrix. Each column represents a unique combination.
Purpose is to structure logic
Cause = condition
Effect = action = expected results
Steps to Create a decision table
1. List all causes in the decision table
2. Calculate the number of possible combinations
3. Fill columns with all possible combinations
4. Reduce test combinations
5. Check covered combinations
6. Add effects to the table
Step 1: List all causes
Hints:
Write down the values the cause/condition can assume
Cluster related causes
Put the most dominating cause first
Put multi-valued causes last
Step 2: Calculate combinations
If all causes are simply Y/N values:
2 ^ (number of causes)
If there is 1 cause with 3 values and 3 causes with 2 values each:
3^1 * 2^3 = 24
Or, use the Values column and multiply each value down the column, e.g. 3 * 2 * 2 * 2 = 24
Step 3: Fill columns
Algorithm:
1. Determine the Repeating Factor (RF): divide the remaining combinations by the number of possible values for that cause
2. Write the first value RF times, then the next value RF times, and so on, until the row is full
3. Move to the next row and go back to step 1
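As a cross-check on steps 2 and 3, the combinations can also be enumerated mechanically. For instance, the sketch below (all table and column names hypothetical) uses SQL CROSS JOINs of one three-valued cause and three two-valued causes to generate exactly the 3 * 2 * 2 * 2 = 24 rows calculated in step 2:
-- Each CauseNValues table holds one row per possible value of that cause.
SELECT c1.Value AS Cause1, c2.Value AS Cause2, c3.Value AS Cause3, c4.Value AS Cause4
FROM Cause1Values c1
CROSS JOIN Cause2Values c2
CROSS JOIN Cause3Values c3
CROSS JOIN Cause4Values c4
-- Cause1Values has 3 rows and the others have 2 rows each, so the result has 24 rows.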
Step 4: Reduce combinations
Find indifferent combinations – place a ‘-’
Join columns where columns are identical
Tip: ensure the effects are the same
Step 5: Check covered combinations
Checksum
For each column calculate the combinations it represents
A ‘-’ represents as many combinations as the cause has values
Multiply for each ‘-’ down the column
Add up total and compare with step 2
Step 6: Add effects to table
Read column by column and determine the effects
One effect can occur in multiple test combinations
“ Cloud Computing”
Cloud Computing is Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand, like the electricity grid.
Cloud computing is a paradigm shift following the shift from mainframe to client–server in the early 1980s. Details are abstracted from the users, who no longer have need for expertise in, or control over, the technology infrastructure "in the cloud" that supports them. Cloud computing describes a new supplement, consumption, and delivery model for IT services based on the Internet, and it typically involves over-the-Internet provision of dynamically scalable and often virtualized resources. It is a byproduct and consequence of the ease-of-access to remote computing sites provided by the Internet. The term "cloud" is used as a metaphor for the Internet, based on the cloud drawing used in the past to represent the telephone network, and later to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents. Typical cloud computing providers deliver common business applications online that are accessed from another Web service or software like a Web browser, while the software and data are stored on servers.
Most cloud computing infrastructures consist of services delivered through common centers and built on servers. Clouds often appear as single points of access for all consumers' computing needs. Commercial offerings are generally expected to meet quality of service (QoS) requirements of customers, and typically include SLAs. The major cloud service providers include Microsoft, Salesforce, Skytap, HP, IBM, Amazon, and Google.
Key features
• Agility improves with users' ability to rapidly and inexpensively re-provision technological infrastructure resources.
• Cost is claimed to be greatly reduced and capital expenditure is converted to operational expenditure. This ostensibly lowers barriers to entry, as infrastructure is typically provided by a third-party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained with usage-based options and fewer IT skills are required for implementation (in-house).
• Device and location independence enable users to access systems using a web browser regardless of their location or what device they are using (e.g., PC, mobile). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.
• Multi-tenancy enables sharing of resources and costs across a large pool of users thus allowing for:
o Centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
o Peak-load capacity increases (users need not engineer for highest possible load-levels)
o Utilization and efficiency improvements for systems that are often only 10–20% utilized.
• Reliability is improved if multiple redundant sites are used, which makes well designed cloud computing suitable for business continuity and disaster recovery. Nonetheless, many major cloud computing services have suffered outages, and IT and business managers can at times do little when they are affected.
• Scalability via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis near real-time, without users having to engineer for peak loads. Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface. One of the most important new methods for overcoming performance bottlenecks for a large class of applications is data parallel programming on a distributed data grid.
• Security could improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels. Security is often as good as or better than under traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford. Providers typically log accesses, but accessing the audit logs themselves can be difficult or impossible. Furthermore, the complexity of security is greatly increased when data is distributed over a wider area and / or number of devices.
• Maintenance of cloud computing applications is easier, since the applications do not have to be installed on each user's computer. They are easier to support and to improve, since the changes reach the clients instantly.
• Metering: cloud computing resource usage should be measurable and should be metered per client and application on a daily, weekly, monthly, and annual basis. This will enable clients to choose a vendor's cloud based on cost and reliability (QoS).
Life before cloud computing
• Traditional business applications—like those from SAP, Microsoft, and Oracle—have always been too complicated and expensive. They need a data center with office space, power, cooling, bandwidth, networks, servers, and storage. A complicated software stack. And a team of experts to install, configure, and run them. They need development, testing, staging, production, and failover environments.
• When you multiply these headaches across dozens or hundreds of apps, it’s easy to see why the biggest companies with the best IT departments aren’t getting the apps they need. Small businesses don’t stand a chance.
Cloud-computing: a better way
• Cloud computing is a better way to run your business. Instead of running your apps yourself, they run on a shared data center. When you use any app that runs in the cloud, you just log in, customize it, and start using it. That’s the power of cloud computing.
• Businesses are running all kinds of apps in the cloud these days, like CRM, HR, accounting, and custom-built apps. Cloud-based apps can be up and running in a few days, which is unheard of with traditional business software. They cost less, because you don’t need to pay for all the people, products, and facilities to run them. And, it turns out they’re more scalable, more secure, and more reliable than most apps. Plus, upgrades are taken care of for you, so your apps get security and performance enhancements and new features—automatically.
• The way you pay for cloud-based apps is also different. Forget about buying servers and software. When your apps run in the cloud, you don’t buy anything. It’s all rolled up into a predictable monthly subscription, so you only pay for what you actually use.
• Finally, cloud apps don’t eat up your valuable IT resources, so your CFO will love it. This lets you focus on deploying more apps, new projects, and innovation.
The bottom line: Cloud computing is a simple idea, but it can have a huge impact on your business.
• A cloud can be private or public. A public cloud sells services to anyone on the Internet. (Currently, Amazon Web Services is the largest public cloud provider.) A private cloud is a proprietary network or a data center that supplies hosted services to a limited number of people. When a service provider uses public cloud resources to create their private cloud, the result is called a virtual private cloud. Private or public, the goal of cloud computing is to provide easy, scalable access to computing resources and IT services.
• Cloud computing comes into focus only when you think about what IT always needs: a way to increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel, or licensing new software. Cloud computing encompasses any subscription-based or pay-per-use service that, in real time over the Internet, extends IT's existing capabilities.
“PERT(Program Evaluation and Review Technique )”
A PERT chart is a project management tool used to schedule, organize, and coordinate tasks within a project. PERT stands for Program Evaluation Review Technique, a methodology developed by the U.S. Navy in the 1950s to manage the Polaris submarine missile program. A similar methodology, the Critical Path Method (CPM), was developed for project management in the private sector at about the same time.
In the diagram, for example, the tasks between nodes 1, 2, 4, 8, and 10 must be completed in sequence. These are called dependent or serial tasks. The tasks between nodes 1 and 2, and nodes 1 and 3 are not dependent on the completion of one to start the other and can be undertaken simultaneously. These tasks are called parallel or concurrent tasks. Tasks that must be completed in sequence but that don't require resources or completion time are considered to have event dependency. These are represented by dotted lines with arrows and are called dummy activities. For example, the dashed arrow linking nodes 6 and 9 indicates that the system files must be converted before the user test can take place, but that the resources and time required to prepare for the user test (writing the user manual and user training) are on another path. Numbers on the opposite sides of the vectors indicate the time allotted for the task.
STEPS IN USING PERT
1. Plan in advance the action to be taken to produce a desired result.
2. Predict/calculate the probable performance time required for the activities.
3. Improve the plan when the predicted performance is not good enough.
4. Measure performance against the plan after the plan is set in motion.
5. Control progress by using the information, and replan the action as required.
6. Repeat the last two steps until the project is complete.
ADVANTAGES OF PERT
1. The network process forces definition of programme tasks and integration of planning.
2. The network highlights the relationships between activities and shows their significance to programme accomplishment.
3. Through the critical path approach, management attention is directed to those activities which are important from the standpoint of timely completion of the programme.
4. Through PERT, schedule status information is integrated and its effect on the overall programme is shown.
5. By analyzing slack areas, trade-offs in resources (taking resources from one activity to another) become possible as a means of improving schedules or costs.
“Capability Maturity Model Integration (CMMI)”
Process Areas Detail:
The CMMI contains 22 process areas indicating the aspects of product development that are to be covered by company processes.
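As a quick orienting sketch before the detailed listings, the process areas described below can be grouped by category and maturity level, for example as a small Python mapping. The grouping shown is a study aid drawn from the descriptions that follow, not an official CMMI artifact, and only a few of the 22 process areas are included here.

```python
# Illustrative sketch: a few of the CMMI process areas described below,
# grouped by category and maturity level. This is a study aid, not an
# official CMMI artifact.

process_areas = {
    "CAR":  ("Causal Analysis and Resolution",   "Support",            5),
    "CM":   ("Configuration Management",         "Support",            2),
    "DAR":  ("Decision Analysis and Resolution", "Support",            3),
    "IPM":  ("Integrated Project Management",    "Project Management", 3),
    "MA":   ("Measurement and Analysis",         "Support",            2),
    "REQM": ("Requirements Management",          "Engineering",        2),
    "VER":  ("Verification",                     "Engineering",        3),
}

# Group abbreviations by maturity level for a quick overview.
by_level = {}
for abbrev, (name, category, level) in process_areas.items():
    by_level.setdefault(level, []).append(abbrev)

for level in sorted(by_level):
    print(f"Maturity Level {level}: {', '.join(sorted(by_level[level]))}")
```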
Causal Analysis and Resolution (CAR)
A Support process area at Maturity Level 5
Purpose
The purpose of Causal Analysis and Resolution (CAR) is to identify causes of defects and other problems and take action to prevent them from occurring in the future.
Specific Practices by Goal
SG 1 Determine Causes of Defects
o SP 1.1 Select Defect Data for Analysis
o SP 1.2 Analyze Causes
SG 2 Address Causes of Defects
o SP 2.1 Implement the Action Proposals
o SP 2.2 Evaluate the Effect of Changes
o SP 2.3 Record Data
Configuration Management (CM)
A Support process area at Maturity Level 2
Purpose
The purpose of Configuration Management (CM) is to establish and maintain the integrity of work products using configuration identification, configuration control, configuration status accounting, and configuration audits.
Specific Practices by Goal
SG 1 Establish Baselines
o SP 1.1 Identify Configuration Items
o SP 1.2 Establish a Configuration Management System
o SP 1.3 Create or Release Baselines
SG 2 Track and Control Changes
o SP 2.1 Track Change Requests
o SP 2.2 Control Configuration Items
SG 3 Establish Integrity
o SP 3.1 Establish Configuration Management Records
o SP 3.2 Perform Configuration Audits
Decision Analysis and Resolution (DAR)
A Support process area at Maturity Level 3
Purpose
The purpose of Decision Analysis and Resolution (DAR) is to analyze possible decisions using a formal evaluation process that evaluates identified alternatives against established criteria.
Specific Practices by Goal
SG 1 Evaluate Alternatives
o SP 1.1 Establish Guidelines for Decision Analysis
o SP 1.2 Establish Evaluation Criteria
o SP 1.3 Identify Alternative Solutions
o SP 1.4 Select Evaluation Methods
o SP 1.5 Evaluate Alternatives
o SP 1.6 Select Solutions
Integrated Project Management +IPPD (IPM)
A Project Management process area at Maturity Level 3
Purpose
The purpose of Integrated Project Management +IPPD (IPM) is to establish and manage the project and the involvement of the relevant stakeholders according to an integrated and defined process that is tailored from the organization's set of standard processes.
Specific Practices by Goal
SG 1 Use the Project's Defined Process
o SP 1.1 Establish the Project's Defined Process
o SP 1.2 Use Organizational Process Assets for Planning Project Activities
o SP 1.3 Establish the Project's Work Environment
o SP 1.4 Integrate Plans
o SP 1.5 Manage the Project Using the Integrated Plans
o SP 1.6 Contribute to the Organizational Process Assets
SG 2 Coordinate and Collaborate with Relevant Stakeholders
o SP 2.1 Manage Stakeholder Involvement
o SP 2.2 Manage Dependencies
o SP 2.3 Resolve Coordination Issues
IPPD Addition:
SG 3 Apply IPPD Principles
o SP 3.1 Establish the Project's Shared Vision
o SP 3.2 Establish the Integrated Team Structure
o SP 3.3 Allocate Requirements to Integrated Teams
o SP 3.4 Establish Integrated Teams
o SP 3.5 Ensure Collaboration among Interfacing Teams
Measurement and Analysis (MA)
A Support process area at Maturity Level 2
Purpose
The purpose of Measurement and Analysis (MA) is to develop and sustain a measurement capability that is used to support management information needs.
Specific Practices by Goal
SG 1 Align Measurement and Analysis Activities
o SP 1.1 Establish Measurement Objectives
o SP 1.2 Specify Measures
o SP 1.3 Specify Data Collection and Storage Procedures
o SP 1.4 Specify Analysis Procedures
SG 2 Provide Measurement Results
o SP 2.1 Collect Measurement Data
o SP 2.2 Analyze Measurement Data
o SP 2.3 Store Data and Results
o SP 2.4 Communicate Results
Organizational Innovation and Deployment (OID)
A Process Management process area at Maturity Level 5
Purpose
The purpose of Organizational Innovation and Deployment (OID) is to select and deploy incremental and innovative improvements that measurably improve the organization's processes and technologies. The improvements support the organization's quality and process-performance objectives as derived from the organization's business objectives.
Specific Practices by Goal
SG 1 Select Improvements
o SP 1.1 Collect and Analyze Improvement Proposals
o SP 1.2 Identify and Analyze Innovations
o SP 1.3 Pilot Improvements
o SP 1.4 Select Improvements for Deployment
SG 2 Deploy Improvements
o SP 2.1 Plan the Deployment
o SP 2.2 Manage the Deployment
o SP 2.3 Measure Improvement Effects
Organizational Process Definition +IPPD (OPD)
A Process Management process area at Maturity Level 3
Purpose
The purpose of Organizational Process Definition +IPPD (OPD) is to establish and maintain a usable set of organizational process assets.
Specific Practices by Goal
SG 1 Establish Organizational Process Assets
o SP 1.1 Establish Standard Processes
o SP 1.2 Establish Life-Cycle Model Descriptions
o SP 1.3 Establish Tailoring Criteria and Guidelines
o SP 1.4 Establish the Organization's Measurement Repository
o SP 1.5 Establish the Organization's Process Asset Library
IPPD Addition:
SG 2 Enable IPPD Management
o SP 2.1 Establish Empowerment Mechanisms
o SP 2.2 Establish Rules and Guidelines for Integrated Teams
o SP 2.3 Balance Team and Home Organization Responsibilities
Organizational Process Focus (OPF)
A Process Management process area at Maturity Level 3
Purpose
The purpose of Organizational Process Focus (OPF) is to plan and implement organizational process improvement based on a thorough understanding of the current strengths and weaknesses of the organization's processes and process assets.
Specific Practices by Goal
SG 1 Determine Process Improvement Opportunities
o SP 1.1 Establish Organizational Process Needs
o SP 1.2 Appraise the Organization's Processes
o SP 1.3 Identify the Organization's Process Improvements
SG 2 Plan and Implement Process Improvement Activities
o SP 2.1 Establish Process Action Plans
o SP 2.2 Implement Process Action Plans
SG 3 Deploy Organizational Process Assets and Incorporate Lessons Learned
o SP 3.1 Deploy Organizational Process Assets
o SP 3.2 Deploy Standard Processes
o SP 3.3 Monitor Implementation
o SP 3.4 Incorporate Process-Related Experiences into the Organizational Process Assets
Organizational Process Performance (OPP)
A Process Management process area at Maturity Level 4
Purpose
The purpose of Organizational Process Performance (OPP) is to establish and maintain a quantitative understanding of the performance of the organization's set of standard processes in support of quality and process-performance objectives, and to provide the process performance data, baselines, and models to quantitatively manage the organization's projects.
Specific Practices by Goal
SG 1 Establish Performance Baselines and Models
o SP 1.1 Select Processes
o SP 1.2 Establish Process Performance Measures
o SP 1.3 Establish Quality and Process Performance Objectives
o SP 1.4 Establish Process Performance Baselines
o SP 1.5 Establish Process Performance Models
Organizational Training (OT)
A Process Management process area at Maturity Level 3
Purpose
The purpose of Organizational Training (OT) is to develop the skills and knowledge of people so they can perform their roles effectively and efficiently.
Specific Practices by Goal
SG 1 Establish an Organizational Training Capability
o SP 1.1 Establish the Strategic Training Needs
o SP 1.2 Determine Which Training Needs Are the Responsibility of the Organization
o SP 1.3 Establish an Organizational Training Tactical Plan
o SP 1.4 Establish Training Capability
SG 2 Provide Necessary Training
o SP 2.1 Deliver Training
o SP 2.2 Establish Training Records
o SP 2.3 Assess Training Effectiveness
Product Integration (PI)
An Engineering process area at Maturity Level 3
Purpose
The purpose of Product Integration (PI) is to assemble the product from the product components, ensure that the product, as integrated, functions properly, and deliver the product.
Specific Practices by Goal
SG 1 Prepare for Product Integration
o SP 1.1 Determine Integration Sequence
o SP 1.2 Establish the Product Integration Environment
o SP 1.3 Establish Product Integration Procedures and Criteria
SG 2 Ensure Interface Compatibility
o SP 2.1 Review Interface Descriptions for Completeness
o SP 2.2 Manage Interfaces
SG 3 Assemble Product Components and Deliver the Product
o SP 3.1 Confirm Readiness of Product Components for Integration
o SP 3.2 Assemble Product Components
o SP 3.3 Evaluate Assembled Product Components
o SP 3.4 Package and Deliver the Product or Product Component
Project Monitoring and Control (PMC)
A Project Management process area at Maturity Level 2
Purpose
The purpose of Project Monitoring and Control (PMC) is to provide an understanding of the project's progress so that appropriate corrective actions can be taken when the project's performance deviates significantly from the plan.
Specific Practices by Goal
SG 1 Monitor Project Against Plan
o SP 1.1 Monitor Project Planning Parameters
o SP 1.2 Monitor Commitments
o SP 1.3 Monitor Project Risks
o SP 1.4 Monitor Data Management
o SP 1.5 Monitor Stakeholder Involvement
o SP 1.6 Conduct Progress Reviews
o SP 1.7 Conduct Milestone Reviews
SG 2 Manage Corrective Action to Closure
o SP 2.1 Analyze Issues
o SP 2.2 Take Corrective Action
o SP 2.3 Manage Corrective Action
Project Planning (PP)
A Project Management process area at Maturity Level 2
Purpose
The purpose of Project Planning (PP) is to establish and maintain plans that define project activities.
Specific Practices by Goal
SG 1 Establish Estimates
o SP 1.1 Estimate the Scope of the Project
o SP 1.2 Establish Estimates of Work Product and Task Attributes
o SP 1.3 Define Project Life Cycle
o SP 1.4 Determine Estimates of Effort and Cost
SG 2 Develop a Project Plan
o SP 2.1 Establish the Budget and Schedule
o SP 2.2 Identify Project Risks
o SP 2.3 Plan for Data Management
o SP 2.4 Plan for Project Resources
o SP 2.5 Plan for Needed Knowledge and Skills
o SP 2.6 Plan Stakeholder Involvement
o SP 2.7 Establish the Project Plan
SG 3 Obtain Commitment to the Plan
o SP 3.1 Review Plans that Affect the Project
o SP 3.2 Reconcile Work and Resource Levels
o SP 3.3 Obtain Plan Commitment
Process and Product Quality Assurance (PPQA)
A Support process area at Maturity Level 2
Purpose
The purpose of Process and Product Quality Assurance (PPQA) is to provide staff and management with objective insight into processes and associated work products.
Specific Practices by Goal
SG 1 Objectively Evaluate Processes and Work Products
o SP 1.1 Objectively Evaluate Processes
o SP 1.2 Objectively Evaluate Work Products and Services
SG 2 Provide Objective Insight
o SP 2.1 Communicate and Ensure Resolution of Noncompliance Issues
o SP 2.2 Establish Records
Quantitative Project Management (QPM)
A Project Management process area at Maturity Level 4
Purpose
The purpose of the Quantitative Project Management (QPM) process area is to quantitatively manage the project's defined process to achieve the project's established quality and process-performance objectives.
Specific Practices by Goal
SG 1 Quantitatively Manage the Project
o SP 1.1 Establish the Project's Objectives
o SP 1.2 Compose the Defined Processes
o SP 1.3 Select the Subprocesses that Will Be Statistically Managed
o SP 1.4 Manage Project Performance
SG 2 Statistically Manage Subprocess Performance
o SP 2.1 Select Measures and Analytic Techniques
o SP 2.2 Apply Statistical Methods to Understand Variation
o SP 2.3 Monitor Performance of the Selected Subprocesses
o SP 2.4 Record Statistical Management Data
Requirements Development (RD)
An Engineering process area at Maturity Level 3
Purpose
The purpose of Requirements Development (RD) is to produce and analyze customer, product, and product-component requirements.
Specific Practices by Goal
SG 1 Develop Customer Requirements
o SP 1.1 Elicit Needs
o SP 1.2 Develop the Customer Requirements
SG 2 Develop Product Requirements
o SP 2.1 Establish Product and Product-Component Requirements
o SP 2.2 Allocate Product-Component Requirements
o SP 2.3 Identify Interface Requirements
SG 3 Analyze and Validate Requirements
o SP 3.1 Establish Operational Concepts and Scenarios
o SP 3.2 Establish a Definition of Required Functionality
o SP 3.3 Analyze Requirements
o SP 3.4 Analyze Requirements to Achieve Balance
o SP 3.5 Validate Requirements
Requirements Management (REQM)
An Engineering process area at Maturity Level 2
Purpose
The purpose of Requirements Management (REQM) is to manage the requirements of the project's products and product components and to identify inconsistencies between those requirements and the project's plans and work products.
Specific Practices by Goal
SG 1 Manage Requirements
o SP 1.1 Obtain an Understanding of Requirements
o SP 1.2 Obtain Commitment to Requirements
o SP 1.3 Manage Requirements Changes
o SP 1.4 Maintain Bidirectional Traceability of Requirements
o SP 1.5 Identify Inconsistencies between Project Work and Requirements
Risk Management (RSKM)
A Project Management process area at Maturity Level 3
Purpose
The purpose of Risk Management (RSKM) is to identify potential problems before they occur so that risk-handling activities can be planned and invoked as needed across the life of the product or project to mitigate adverse impacts on achieving objectives.
Specific Practices by Goal
SG 1 Prepare for Risk Management
o SP 1.1 Determine Risk Sources and Categories
o SP 1.2 Define Risk Parameters
o SP 1.3 Establish a Risk Management Strategy
SG 2 Identify and Analyze Risks
o SP 2.1 Identify Risks
o SP 2.2 Evaluate, Categorize, and Prioritize Risks
SG 3 Mitigate Risks
o SP 3.1 Develop Risk Mitigation Plans
o SP 3.2 Implement Risk Mitigation Plans
Supplier Agreement Management (SAM)
A Project Management process area at Maturity Level 2
Purpose
The purpose of Supplier Agreement Management (SAM) is to manage the acquisition of products from suppliers for which there exists a formal agreement.
Specific Practices by Goal
SG 1 Establish Supplier Agreements
o SP 1.1 Determine Acquisition Type
o SP 1.2 Select Suppliers
o SP 1.3 Establish Supplier Agreements
SG 2 Satisfy Supplier Agreements
o SP 2.1 Execute the Supplier Agreement
o SP 2.2 Monitor Selected Supplier Processes
o SP 2.3 Evaluate Selected Supplier Work Products
o SP 2.4 Accept the Acquired Product
o SP 2.5 Transition Products
Technical Solution (TS)
An Engineering process area at Maturity Level 3
Purpose
The purpose of Technical Solution (TS) is to design, develop, and implement solutions to requirements. Solutions, designs, and implementations encompass products, product components, and product-related life-cycle processes either singly or in combination as appropriate.
Specific Practices by Goal
SG 1 Select Product-Component Solutions
o SP 1.1 Develop Alternative Solutions and Selection Criteria
o SP 1.2 Select Product Component Solutions
SG 2 Develop the Design
o SP 2.1 Design the Product or Product Component
o SP 2.2 Establish a Technical Data Package
o SP 2.3 Design Interfaces Using Criteria
o SP 2.4 Perform Make, Buy, or Reuse Analysis
SG 3 Implement the Product Design
o SP 3.1 Implement the Design
o SP 3.2 Develop Product Support Documentation
Validation (VAL)
An Engineering process area at Maturity Level 3
Purpose
The purpose of Validation (VAL) is to demonstrate that a product or product component fulfills its intended use when placed in its intended environment.
Specific Practices by Goal
SG 1 Prepare for Validation
o SP 1.1 Select Products for Validation
o SP 1.2 Establish the Validation Environment
o SP 1.3 Establish Validation Procedures and Criteria
SG 2 Validate Product or Product Components
o SP 2.1 Perform Validation
o SP 2.2 Analyze Validation Results
Verification (VER)
An Engineering process area at Maturity Level 3
Purpose
The purpose of Verification (VER) is to ensure that selected work products meet their specified requirements.
Specific Practices by Goal
SG 1 Prepare for Verification
o SP 1.1 Select Work Products for Verification
o SP 1.2 Establish the Verification Environment
o SP 1.3 Establish Verification Procedures and Criteria
SG 2 Perform Peer Reviews
o SP 2.1 Prepare for Peer Reviews
o SP 2.2 Conduct Peer Reviews
o SP 2.3 Analyze Peer Review Data
SG 3 Verify Selected Work Products
o SP 3.1 Perform Verification
o SP 3.2 Analyze Verification Results
“Capital Market”
Capital Market Theory
In studying capital market theory we deal with issues like the role of the capital markets, the major capital markets in the US, initial public offerings and the role of venture capital in capital markets, financial innovation and markets in derivative instruments, the role of the Securities and Exchange Commission, the role of the Federal Reserve System, the role of the US Treasury, and the regulatory requirements on the capital market.
The market where investment funds like bonds, equities, and mortgages are traded is known as the capital market. Financial instruments with short or medium-term maturity periods are dealt with in the money market, whereas financial instruments with long maturity periods are dealt with in the capital market.
The issues that have been mentioned above to explain the capital market theory may be discussed under the following heads:
Role of the Capital Market
The main function of the capital market is to channel investments from investors who have surplus funds to investors who have deficit funds. The different types of financial instruments traded in the capital markets are equity instruments, credit market instruments, insurance instruments, foreign exchange instruments, hybrid instruments, and derivative instruments. The money market instruments traded in the capital market are Treasury Bills, federal agency securities, federal funds, negotiable certificates of deposit, commercial paper, bankers' acceptances, repurchase agreements, Eurocurrency deposits, Eurocurrency loans, futures, and options.
Capital market in the US
The capital market in the US is very advanced and uses very modern technologies in its operation. Capital market instruments are traded either in over-the-counter (OTC) markets or on exchanges. The New York Stock Exchange is the oldest and most prominent exchange in the US capital market.
Initial Public Offering and the role of Venture Capital in the capital market
Companies raise their long-term capital through the issue of shares floated in the capital market in the form of an Initial Public Offering (IPO). Venture capital consists of funds raised in the capital market via specialized operators; it is also a very important source of finance for innovative companies.
Markets in Derivatives
Derivatives like options, futures, and credit derivatives are traded in the capital markets.
Role of the Federal Reserve System and the US Treasury
The Federal Reserve System plays an important role in the capital market by providing liquidity and managing credit conditions in the US financial system. US Treasury operations bridge the gap between cash inflows and outflows, thereby providing liquidity to the US capital market.
Capital Market Investment
Capital market investment takes place through the bond market and the stock market. The capital market is basically the financial pool in which different companies as well as the government can raise long-term funds.
Capital market investment that takes place through the bond and stock markets may be elucidated under the following heads.
Capital market investments in the stock market
The stock market is basically the trading ground for capital market investment in the following:
• company stocks
• derivatives
• other securities
Capital market investments in the stock market are made by
• small individual stock investors
• large hedge fund traders.
Capital market investments can occur either in:
• the physical market, using a method known as open outcry (the New York Stock Exchange is a physical market), or
• a virtual exchange, where trading is done over a computer network (NASDAQ is a virtual exchange).
Investments in the stock market help large companies to raise their long-term capital. Investors in the stock market have the liberty to buy or sell the stock they are holding at their own discretion, unlike the case of government securities, bonds, or real estate. The stock exchanges basically function as the clearing house for such liquid transactions. Capital market investments in the stock market are also made through derivative instruments like stock options and stock futures. Derivatives are financial instruments whose value is determined by the price of the underlying asset.
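Since a derivative's value is driven by the price of its underlying asset, a minimal illustration is the payoff of a stock option at expiry. The sketch below uses a hypothetical strike price and sample underlying prices purely for illustration.

```python
# Minimal illustration of a derivative whose value depends on the underlying:
# the payoff of a call and a put stock option at expiry.
# The strike and prices are hypothetical example values.

def call_payoff(underlying_price, strike):
    """Right to buy at the strike: worth something only above the strike."""
    return max(underlying_price - strike, 0.0)

def put_payoff(underlying_price, strike):
    """Right to sell at the strike: worth something only below the strike."""
    return max(strike - underlying_price, 0.0)

strike = 100.0
for price in (80.0, 100.0, 120.0):
    print(f"underlying={price:6.1f}  call={call_payoff(price, strike):5.1f}  "
          f"put={put_payoff(price, strike):5.1f}")
# underlying=  80.0  call=  0.0  put= 20.0
# underlying= 100.0  call=  0.0  put=  0.0
# underlying= 120.0  call= 20.0  put=  0.0
```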
Capital Market Investments in the Bond Market
The bond market is a financial market where participants buy and sell debt securities. The bond market is also known as the debt, credit, or fixed income market. There are different types of bond markets based on the different types of bonds that are traded. They are:
• corporate
• government and agency
• municipal
• bonds backed by mortgages and other assets
• Collateralized Debt Obligations.
Bonds, except for corporate bonds, do not have formal exchanges but are traded over-the-counter. Individual investors are attracted to the bond market and invest through bond funds, closed-end funds, or unit investment trusts. Net inflows into bond funds increased by 97% in 2006 compared with 2005. Exchange-traded funds are another way of investing directly in bond issues.
The capital market investment in the bond market is done by
• institutional investors
• governments, traders and
• individuals.
The Global Capital Market deals with mergers and acquisitions, strategic equity partnering, management buyout services, acquisition search services, corporate debt and equity, and financial restructuring. Each of these terms may be explained under the following heads:
Mergers and Acquisitions
When the shareholder of a successful business is deciding on the sale of his business, his operating experience alone is not enough; such decisions require expertise. A balanced approach needs to be taken, and the Global Capital Market offers invaluable suggestions in the case of such mergers and acquisitions.
Seller Representation
The process involved in the sale of a business is very complex and dynamic. The Global Capital Market provides the alternative that would be most profitable, after a thorough evaluation of the existing business, and offers the best means of maximizing value in the selling process.
Strategic Equity Partnering
There are participants in the Global Capital Market who do not want to sell their business. In such cases the Global Capital Market offers a Strategic Equity Partnership. Through this process the business owner can gain liquidity and at the same time retain sufficient control over the business operation, while the investor can enjoy quality management accompanied by capital appreciation.
Global Capital Market
The Global Capital Market, in this way, develops partnerships that are profitable for both the business owner and the investor.
Management Buyout Services
Management is of key importance in running a company successfully. The Global Capital Market helps management fulfil its goal of acquiring target companies. The following services are provided by the Global Capital Market for this purpose:
• Creating close team support with other professional advisors to coordinate the acquisition strategy
• Negotiating and structuring the deals
• Designing and sourcing the necessary finance to close the deal
• Providing post-transaction capital and support for long-term success.
The Global Capital Market has a huge database which helps to reduce search costs in acquisitions. With the help of its large network of relationships, the Global Capital Market can help in making the right deal at the right price.
Corporate Debt and Equity
The Global Capital Market helps raise well-structured and well-priced financing through global equity and debt.
Financial Restructuring
The Global Capital Market can help creditors, debtors, and equity holders in the following ways:
• By renegotiating the existing debt and loan agreements
• By raising additional debt and equity
• By divesting corporate assets
• By arranging mergers
• By negotiating workout plans with creditors
• By restructuring debt to match the cash generating potential
“Work Breakdown Structure”
Work Breakdown Structure
The Work Breakdown Structure is a tree structure which shows a subdivision of the effort required to achieve an objective, for example a program, project, or contract. In a project or contract, the WBS is developed by starting with the end objective and successively subdividing it into manageable components in terms of size, duration, and responsibility (e.g., systems, subsystems, components, tasks, subtasks, and work packages) which include all steps necessary to achieve the objective.
The Work Breakdown Structure provides a common framework for the natural development of the overall planning and control of a contract and is the basis for dividing work into definable increments from which the statement of work can be developed and technical, schedule, cost, and labor hour reporting can be established.
A work breakdown structure permits summing of subordinate costs for tasks, materials, etc., into their successively higher level “parent” tasks, materials, etc. For each element of the work breakdown structure, a description of the task to be performed is generated. This technique (sometimes called a System Breakdown Structure) is used to define and organize the total scope of a project.
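The cost roll-up described above can be sketched as a small tree in which each parent element's cost is the sum of its children's costs. The element names and figures below are hypothetical examples, not from any real project.

```python
# Illustrative sketch of WBS cost roll-up: each parent element's cost is the
# sum of its children's costs. Element names and costs are hypothetical.

class WBSElement:
    def __init__(self, name, cost=0.0, children=None):
        self.name = name
        self.cost = cost              # direct cost of a work package (leaf)
        self.children = children or []

    def rolled_up_cost(self):
        """Leaf elements return their own cost; parents sum their children."""
        if not self.children:
            return self.cost
        return sum(child.rolled_up_cost() for child in self.children)

project = WBSElement("1 Billing System", children=[
    WBSElement("1.1 Requirements", children=[
        WBSElement("1.1.1 Elicit requirements", cost=5000),
        WBSElement("1.1.2 Write specification", cost=3000),
    ]),
    WBSElement("1.2 Implementation", children=[
        WBSElement("1.2.1 Build database", cost=8000),
        WBSElement("1.2.2 Build user interface", cost=12000),
    ]),
])

print(project.name, "total cost:", project.rolled_up_cost())   # 28000
```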
The WBS is organized around the primary products of the project (or planned outcomes) instead of the work needed to produce the products (planned actions). Since the planned outcomes are the desired ends of the project, they form a relatively stable set of categories in which the costs of the planned actions needed to achieve them can be collected. A well-designed WBS makes it easy to assign each project activity to one and only one terminal element of the WBS. In addition to its function in cost accounting, the WBS also helps map requirements from one level of system specification to another, for example a requirements cross reference matrix mapping functional requirements to high level or low level design documents.
Work Breakdown Structure in Theory
Unfortunately, too many academics and academic textbooks teach the work breakdown structure as a big "to do" list. They ignore the impact on the PM's ability to track progress and make good assignments. Many people who have taken these project management classes spend very little time learning the work breakdown structure and, as a result, think it is just a list with "to dos" for every team member. The results are disastrous as we will discuss below.
Work Breakdown Structure in Practice
In practice, many project managers follow a "to do" list approach as discussed above. The result is that their assignments for the team members are vague and the performance expectations are unclear. On those project teams the estimates are always inaccurate because it is very hard to estimate the work or duration of a "to do" list item when the deliverable is too general. As a consequence, the team members are guessing about what is expected and routinely have to redo assignments when their guess doesn't meet the current performance expectation of the project manager. It is this "to do" list approach to the work breakdown structure that is one of the major causes of the overall 70% project failure rate.
WBS "Best Practices"In the Real World
In the typical situation project managers face in the real world, we have no formal authority over the team. But one thing we can do is decompose the work breakdown structure into a measured definition of success on each deliverable. No matter how limited our authority over the team, we can still follow best practices on the WBS. We start from the overall project acceptance criteria, which are a measurable definition of success. Then we continue the decomposition, identifying the major deliverables and defining success on each one in quantified terms. We don't want to have to guess about whether we produced the right deliverable; we want to be able to measure it at the end of the work. As an example, a task such as "improve service on customer phone calls" is a typical "to do" list item that might be included in a work breakdown structure. It makes a terrible assignment and invites scope creep. On the other hand, if we decompose our deliverables properly, that work would have a metric defining success such as: "95% of the customers experience hold time of less than 15 seconds." It is difficult to come up with these measured outcomes, primarily because we have to decide exactly what we want. However, the benefits are enormous in terms of more accurate estimating, more confident team members who know what success is before they start work, and tighter control of scope, because the precision of these definitions helps us keep out unnecessary work.
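A measured definition of success like the hold-time example above can also be turned into an automated check at the end of the work. The sketch below uses made-up sample data purely for illustration.

```python
# Illustrative check of the measurable acceptance criterion quoted above:
# "95% of the customers experience hold time of less than 15 seconds."
# The hold times below are made-up sample data.

hold_times_seconds = [4, 9, 12, 3, 14, 22, 7, 11, 6, 13, 10, 8, 5, 12, 9, 16,
                      7, 11, 14, 6]

within_target = sum(1 for t in hold_times_seconds if t < 15)
percentage = 100.0 * within_target / len(hold_times_seconds)

print(f"{percentage:.1f}% of calls had hold time under 15 seconds")
print("Acceptance criterion met" if percentage >= 95.0 else "Criterion not met")
```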
The Work Breakdown Structure is a tree structure, which shows a subdivision of effort required to achieve an objective; for example a program, project, and contract. In a project or contract, the WBS is developed by starting with the end objective and successively subdividing it into manageable components in terms of size, duration, and responsibility (e.g., systems, subsystems, components, tasks, subtasks, and work packages) which include all steps necessary to achieve the objective.
The Work Breakdown Structure provides a common framework for the natural development of the overall planning and control of a contract and is the basis for dividing work into definable increments from which the statement of work can be developed and technical, schedule, cost, and labor hour reporting can be established.
A work breakdown structure permits summing of subordinate costs for tasks, materials, etc., into their successively higher level “parent” tasks, materials, etc. For each element of the work breakdown structure, a description of the task to be performed is generated. This technique (sometimes called a System Breakdown Structure) is used to define and organize the total scope of a project.
The WBS is organized around the primary products of the project (or planned outcomes) instead of the work needed to produce the products (planned actions). Since the planned outcomes are the desired ends of the project, they form a relatively stable set of categories in which the costs of the planned actions needed to achieve them can be collected. A well-designed WBS makes it easy to assign each project activity to one and only one terminal element of the WBS. In addition to its function in cost accounting, the WBS also helps map requirements from one level of system specification to another, for example a requirements cross reference matrix mapping functional requirements to high level or low level design documents.
Work Breakdown Structure in Theory
Unfortunately, too many academics and academic textbooks teach the work breakdown structure as a big "to do" list. They ignore the impact on the PM's ability to track progress and make good assignments. Many people who have taken these project management classes spend very little time learning the work breakdown structure and, as a result, think it is just a list with "to dos" for every team member. The results are disastrous as we will discuss below.
Work Breakdown Structure in Practice
In practice, many project managers follow a "to do" list approach as discussed above. The result is that their assignments for the team members are vague and the performance expectations are unclear. On those project teams the estimates are always inaccurate because it is very hard to estimate the work or duration of a "to do" list item when the deliverable is too general. As a consequence, the team members are guessing about what is expected and routinely have to redo assignments when their guess doesn't meet the current performance expectation of the project manager. It is this "to do" list approach to the work breakdown structure that is one of the major causes of the overall 70% project failure rate.
WBS "Best Practices"In the Real World
In the typical situation project managers face in the real world, we have no formal authority over the team. But one thing we can do is decompose the work breakdown structure into a measured definition of success for each deliverable. No matter how limited our authority over the team, we can still follow best practices on the WBS. We start from the overall project acceptance criteria, which are a measurable definition of success. Then we continue the decomposition, identifying the major deliverables and defining success on each one in quantified terms. We don't want to have to guess about whether we produced the right deliverable; we want to be able to measure it at the end of the work. As an example, a task such as "improve service on customer phone calls" is a typical "to do" list item that might be included in a work breakdown structure. It makes a terrible assignment and invites scope creep. On the other hand, if we decompose our deliverables properly, that work would have a metric defining success, such as: "95% of the customers experience hold time of less than 15 seconds." It is difficult to come up with these measured outcomes, primarily because we have to decide exactly what we want. However, the benefits are enormous: more accurate estimates, more confident team members who know what success is before they start work, and tighter control of scope, because the precision of these definitions helps us keep out unnecessary work.
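To make the "measurable definition of success" concrete, here is a minimal sketch, assuming hypothetical call-record data, of how a quantified acceptance criterion like the hold-time metric above could be checked mechanically at the end of the work rather than judged subjectively.

```python
# Minimal sketch: evaluating a quantified acceptance criterion against data.
# The hold times below are made-up sample values; in practice they would come
# from the call centre's reporting system.

hold_times_seconds = [4, 9, 12, 14, 3, 22, 7, 11, 6, 13, 8, 10, 5, 9, 16, 2, 7, 12, 11, 4]

threshold_seconds = 15      # "hold time of less than 15 seconds"
required_fraction = 0.95    # "95% of the customers"

within_threshold = sum(1 for t in hold_times_seconds if t < threshold_seconds)
achieved_fraction = within_threshold / len(hold_times_seconds)

print(f"{achieved_fraction:.0%} of calls under {threshold_seconds}s "
      f"(target {required_fraction:.0%}) -> "
      f"{'PASS' if achieved_fraction >= required_fraction else 'FAIL'}")
```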
Software Installation/Uninstallation Testing
Have you performed software installation testing? How was the experience? Installation testing (also called implementation testing) is an interesting part of the software testing life cycle.
Installation testing is like introducing a guest into your home: the new guest should be properly introduced to all the family members so that he feels comfortable. Installing new software is much the same.
If the installation succeeds on the new system, the customer will certainly be happy. But what if things go completely wrong? If the installation fails, the program will not work on that system, and worse, it can leave the user's system badly damaged; the user might even have to reinstall the entire operating system.
Will you make a good impression on the user in that case? Definitely not! Your chance to win a loyal customer is ruined by incomplete installation testing. What do you need to do for a good first impression? Test the installer thoroughly, with a combination of manual and automated processes, on different machines with different configurations. The major concern in installation testing is time: it takes a lot of time to execute even a single test case. If you are testing a large application's installer, think about the time required to run that many test cases across different configurations.
We will look at different methods for manual installer testing and some basic guidelines for automating the installation process.
To start installation testing, first decide how many different system configurations you want to test against. Prepare one basic hard disk drive: format it with the most common or default file system, install the most common operating system (Windows), and install the basic required components. Create an image of this base drive, and build the other configurations on top of it. Keep one image for each combination of operating system and file system to be used in further testing.
How can we use automation in this process? Dedicate some machines to creating base images (use software such as Norton Ghost to take exact images of an operating system quickly). This saves tremendous time on each test case. For example, if installing one OS with the basic configuration takes about an hour, then every test case that needs a fresh OS costs an hour or more. Restoring an image, by contrast, takes only 5 to 10 minutes, saving roughly 40 to 50 minutes per test case.
You can also reuse one operating system for multiple runs of the installer, uninstalling the application each time to restore the base state for the next test case. Be careful here: the uninstaller itself must already have been tested and be working reliably. A minimal harness for this cycle is sketched below.
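The following is a rough sketch of such an install/uninstall cycle driven from a script. The installer path, uninstaller path, and the "/S" silent-mode switch are assumptions for illustration; a real installer's documented command-line switches would be used instead.

```python
# Minimal sketch of an install/uninstall test cycle driven from a script.
# Paths and the "/S" silent flag are hypothetical placeholders.

import subprocess
import sys

INSTALLER = r"C:\builds\MyApp-Setup.exe"                # hypothetical installer
UNINSTALLER = r"C:\Program Files\MyApp\uninstall.exe"   # hypothetical uninstaller
SILENT_FLAG = "/S"                                      # assumed silent-mode switch


def run_step(description: str, command: list) -> bool:
    """Run one step of the cycle and report whether it exited cleanly."""
    result = subprocess.run(command)
    ok = result.returncode == 0
    print(f"{description}: {'OK' if ok else f'FAILED (exit {result.returncode})'}")
    return ok


def install_uninstall_cycle() -> bool:
    if not run_step("Install", [INSTALLER, SILENT_FLAG]):
        return False
    # ... post-install checks (files, registry, disk space) would go here ...
    if not run_step("Uninstall", [UNINSTALLER, SILENT_FLAG]):
        return False
    # ... post-uninstall checks (leftover files, registry keys) would go here ...
    return True


if __name__ == "__main__":
    sys.exit(0 if install_uninstall_cycle() else 1)
```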
Installation testing tips with some broad test cases:
1) Use flow diagrams to perform installation testing. Flow diagrams simplify the task; see the example flow diagram for a basic installation test case.
Add more test cases to this basic flow chart; for example, if your application is not a first release, add the different logical installation and upgrade paths.
2) If you have previously installed a compact (basic) version of the application, then in the next test case install the full version on the same path used for the compact version.
3) If you are using a flow diagram to check the files written to disk during installation, use the same flow diagram in reverse order to verify that uninstallation removes all of those files.
4) Use the flow diagrams to automate the testing effort; they are easy to convert into automated scripts.
5) Test the installer's disk space check. If the installer reports that 1 MB is required, verify whether exactly 1 MB is used or more space is consumed during installation; if more is used, flag it as an error.
6) Test the disk space requirement on different file system formats; for example, FAT16 will require more space than the more efficient NTFS or FAT32 file systems.
7) If possible, set up a dedicated system solely for creating disk images. As noted above, this will save testing time.
8) Use a distributed testing environment for installation testing. A distributed environment saves time and lets you manage all the different test cases from a single machine. A good approach is to create a master machine that drives several slave machines over the network; you can then start installations simultaneously on different machines from the master.
9) Try to automate the check of the files written to disk. Maintain the expected file list in a spreadsheet and feed it as input to an automated script that verifies each path to confirm a correct installation (see the sketch after this list).
10) Use freely available tools to verify registry changes after a successful installation, and compare the actual registry changes against your expected change list.
11) Forcibly interrupt the installation process partway through. Observe the system's behavior and whether it recovers to its original state without any issues. You can test this interruption at every installation step.
12) Disk space checking: this is a crucial check in installation testing, and you can use both manual and automated methods. Manually, check the free disk space on the drive before installation and the space reported by the installer, to confirm that the installer calculates and reports disk space accurately. Check the disk space again after installation to verify the actual usage (see the sketch after this list). Run various combinations of disk space availability, using tools that automatically fill the disk during installation, and check how the system behaves under low disk space conditions while installing.
13) As you check installation, test uninstallation as well. Before each new installation iteration, make sure that all files written to disk have been removed by the uninstaller. Sometimes the uninstallation routine removes only the files from the most recent upgrade, leaving the old version's files untouched. Also check the reboot prompt after uninstallation, both allowing the reboot and forcing the system not to reboot.
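As a minimal sketch of the automated checks mentioned in tips 5, 9, 12, and 13, the following Python fragment measures free disk space before and after installation and verifies an expected file list after install and again after uninstall. The CSV file name, drive letter, and claimed size are hypothetical values used only for illustration.

```python
# Minimal sketch of two automated installation checks:
# (a) compare the disk space actually used with what the installer claimed, and
# (b) verify an expected file list after installation, and again after
#     uninstallation to catch leftovers.

import csv
import os
import shutil

DRIVE = "C:\\"
EXPECTED_FILES_CSV = "expected_files.csv"   # one absolute path per row (hypothetical)
CLAIMED_BYTES = 1 * 1024 * 1024             # installer claims it needs 1 MB (hypothetical)


def free_bytes(path: str = DRIVE) -> int:
    """Free space on the target drive, using the standard library."""
    return shutil.disk_usage(path).free


def check_disk_usage(free_before: int, free_after: int) -> None:
    used = free_before - free_after
    print(f"Installer claimed {CLAIMED_BYTES} bytes, actually used {used} bytes")
    if used > CLAIMED_BYTES:
        print("ERROR: installation used more space than the installer reported")


def load_expected_files() -> list:
    with open(EXPECTED_FILES_CSV, newline="") as handle:
        return [row[0] for row in csv.reader(handle) if row]


def check_files(expected_present: bool) -> None:
    """After install, expect every path to exist; after uninstall, expect none."""
    for path in load_expected_files():
        if os.path.exists(path) != expected_present:
            state = "missing after install" if expected_present else "left over after uninstall"
            print(f"ERROR: {path} {state}")


# Typical use in a test cycle:
#   before = free_bytes()
#   ... run installer ...
#   check_disk_usage(before, free_bytes())
#   check_files(expected_present=True)
#   ... run uninstaller ...
#   check_files(expected_present=False)
```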
I have covered many areas of manual as well as automated installation testing. There are still other areas to focus on, depending on the complexity of the software being installed. Important tasks not addressed here include installation over the network, online installation, patch installation, database checks during installation, and shared DLL installation and uninstallation.