Wednesday, December 12, 2007

The top 50 PHP editors

The following is a list of the top 50 PHP editors (commercial and freeware), with reviewed links to php-editors.com:

Editor Name (Platform/OS):

  • PHP Edit (Windows)
  • Dreamweaver (Windows)
  • NuSphere PhpED (Windows, Linux)
  • Maguma Workbench (Windows, Linux, Mac)
  • emacs (Windows, Unix, Linux, Mac, Other)
  • ActiveState Komodo (Windows, Unix, Linux, Other)
  • PHP Designer 2005 (Windows)
  • Komodo (Windows, Linux, Other)
  • TSW WebCoder 2005 (Windows)
  • VIM (Windows, Unix, Linux)
  • DzSoft PHP Editor (Windows)
  • Davor’s PHP Constructor (Windows)
  • Edit Plus (Windows)
  • HTML-Kit (Windows)
  • PHP Expert Editor (Windows)
  • Anjuta (Unix, Linux)
  • Bluefish (Linux)
  • Quanta Plus (Linux)
  • Zend Studio (Windows, Unix, Linux, Mac, Other)
  • Kate (Linux)
  • Maguma Studio Free (Windows)
  • PHP Editor by EngInSite (Windows)
  • PHP Eclipse (Unix, Linux)
  • Xored:: WebStudio (Windows, Unix, Linux, Other)
  • SciTE (Windows, Unix, Linux, Other)
  • VS.Php (Windows)
  • Maguma Studio Pro (Windows)
  • Macromedia HomeSite (Windows)
  • TextPad (Windows)
  • PHP Edit (Windows)
  • EngInSite Editor for PHP (Windows)
  • BBedit (Mac)
  • BBedit Lite (Mac)
  • Cooledit
  • Nedit (Unix, Linux)
  • PSPad (Windows)
  • PHP Coder (Windows)
  • AceHTML Pro (Windows)
  • Top PHP Studio (Windows)
  • jEdit (Windows, Unix, Linux, Mac, Other)
  • SubEthaEdit (Mac)
  • umdev (Windows)
  • Dev-PHP (Windows)
  • Crimson Editor (Windows)
  • PHP Processor (Windows)
  • tsWebEditor (Windows)
  • Svoi.NET - PHP Edit XP (Windows)
  • ConTEXT (Windows)
  • PHP Side (Simple IDE) (Windows, Unix, Linux)
  • HAPedit (Windows)
  • EmEditor (Windows)
  • Roadsend Studio (Windows, Unix, Linux)
  • TruStudio (Windows, Unix, Linux, Mac)
  • Smultron (Mac)
  • PHP backend generator (Windows, Unix, Linux, Mac, Other)
  • PHPMaker (Windows)
  • Pidela (Windows, Unix, Linux, Mac)
  • Arisesoft Winsyntax (Windows)
  • SEG (Windows)

Sunday, December 2, 2007

Top 5 Free Shopping Cart Apps

1) Storesprite

  • printable receipts
  • order tracking
  • discount/gift codes
  • feature list found here

Download here (commercial and open source versions available)

2) Cart97 Pro

  • search engine friendly URLs
  • template driven
  • feature list found here

Download here (commercial and open source versions available)

3) Cubecart

  • free version requires keeping the original copyright information
  • can sell both digital and physical goods
  • multiple currency support
  • feature list found here

Download here (commercial and open source versions available)

4) Zen cart

  • based on osCommerce
  • many template providers such as templatemonster.com support it
  • feature list found here

Download here

5) AgoraCart Pro

  • written in Perl
  • shipping modules: UPS, FedEx, and US Mail
  • supports many payment processors
  • feature list found here

Download here (commercial and open source versions available)

Saturday, December 1, 2007

How to write a software requirements specification

How to write a software requirements specification
by Robert Japenga

What Makes a Great Software Requirements Specification?

There are many good definitions of System and Software Requirements Specifications that will provide us a good basis upon which we can both define a great specification and help us identify deficiencies in our past efforts. There is also a lot of great stuff on the web about writing good specifications. The problem is not lack of knowledge about how to create a correctly formatted specification or even what should go into the specification. The problem is that we don't follow the definitions out there.

We have to keep in mind that the goal is not to create great specifications but to create great products and great software. Can you create a great product without a great specification? Absolutely! You can also make your first million through the lottery – but why take your chances? Systems and software these days are so complex that to embark on the design before knowing what you are going to build is foolish and risky.

The IEEE (http://www.ieee.org/) is an excellent source for definitions of System and Software Specifications. As designers of real-time, embedded system software, we use IEEE STD 830-1998 as the basis for all of our Software Specifications unless our clients specifically request otherwise. Essential to having a great Software Specification is having a great System Specification. The equivalent IEEE standard for that is IEEE STD 1233-1998. However, for most purposes in smaller systems, the same templates can be used for both.

What are the benefits of a Great SRS?

The IEEE 830 standard defines the benefits of a good SRS:
Establish the basis for agreement between the customers and the suppliers on what the software product is to do. The complete description of the functions to be performed by the software specified in the SRS will assist the potential users to determine if the software specified meets their needs or how the software must be modified to meet their needs. [NOTE: We use it as the basis of our contract with our clients all the time].

Reduce the development effort. The preparation of the SRS forces the various concerned groups in the customer’s organization to consider rigorously all of the requirements before design begins and reduces later redesign, recoding, and retesting. Careful review of the requirements in the SRS can reveal omissions, misunderstandings, and inconsistencies early in the development cycle when these problems are easier to correct.

Provide a basis for estimating costs and schedules. The description of the product to be developed as given in the SRS is a realistic basis for estimating project costs and can be used to obtain approval for bids or price estimates. [NOTE: Again, we use the SRS as the basis for our fixed price estimates]

Provide a baseline for validation and verification. Organizations can develop their validation and verification plans much more productively from a good SRS. As a part of the development contract, the SRS provides a baseline against which compliance can be measured. [NOTE: We use the SRS to create the Test Plan].

Facilitate transfer. The SRS makes it easier to transfer the software product to new users or new machines. Customers thus find it easier to transfer the software to other parts of their organization, and suppliers find it easier to transfer it to new customers.

Serve as a basis for enhancement. Because the SRS discusses the product but not the project that developed it, the SRS serves as a basis for later enhancement of the finished product. The SRS may need to be altered, but it does provide a foundation for continued production evaluation. [NOTE: This is often a major pitfall – when the SRS is not continually updated with changes]

What should the SRS address?

Again from the IEEE standard:

The basic issues that the SRS writer(s) shall address are the following:

a) Functionality. What is the software supposed to do?
b) External interfaces. How does the software interact with people, the system’s hardware, other hardware, and other software?
c) Performance. What is the speed, availability, response time, recovery time of various software functions, etc.?
d) Attributes. What are the portability, correctness, maintainability, security, etc. considerations?
e) Design constraints imposed on an implementation. Are there any required standards in effect, implementation language, policies for database integrity, resource limits, operating environment(s) etc.?

What are the characteristics of a great SRS?

Again from the IEEE standard:
An SRS should be
a) Correct
b) Unambiguous
c) Complete
d) Consistent
e) Ranked for importance and/or stability
f) Verifiable
g) Modifiable
h) Traceable

Correct - This is like motherhood and apple pie. Of course you want the specification to be correct. No one writes a specification that they know is incorrect. We like to say - "Correct and Ever Correcting." The discipline is keeping the specification up to date when you find things that are not correct.

Unambiguous -
An SRS is unambiguous if, and only if, every requirement stated therein has only one interpretation. Again, easier said than done. Spending time on this area prior to releasing the SRS can be a waste of time. But as you find ambiguities - fix them.

Complete -
A simple test of this is that it should be all that is needed by the software designers to create the software.

Consistent -
The SRS should be consistent within itself and consistent to its reference documents. If you call an input "Start and Stop" in one place, don't call it "Start/Stop" in another.

Ranked for Importance -
Very often a new system has requirements that are really marketing wish lists. Some may not be achievable. It is useful to provide this information in the SRS.

Verifiable -
Don't put in requirements like - "It should provide the user a fast response." Another of my favorites is - "The system should never crash." Instead, provide a quantitative requirement like: "Every key stroke should provide a user response within 100 milliseconds."

Modifiable -
Having the same requirement in more than one place may not be wrong - but tends to make the document not maintainable.

Traceable -
Often, this is not important in a non-politicized environment. However, in most organizations, it is sometimes useful to connect the requirements in the SRS to a higher level document. Why do we need this requirement?

What is the difference between a System Specification and a Software Specification?

Very often we find that companies do not understand the difference between a System specification and a Software Specification. Important issues are not defined up front and Mechanical, Electronic and Software designers do not really know what their requirements are.

The following is a high level list of requirements that should be addressed in a System Specification:

  • Define the functions of the system
  • Define the Hardware / Software Functional Partitioning
  • Define the Performance Specification
  • Define the Hardware / Software Performance Partitioning
  • Define Safety Requirements
  • Define the User Interface (A good user’s manual is often an overlooked part of the System specification. Many of our customers haven’t even considered that this is the right time to write the user’s manual.)
  • Provide Installation Drawings/Instructions.
  • Provide Interface Control Drawings (ICD’s, External I/O)

One job of the System specification is to define the full functionality of the system. In many systems we work on, some functionality is performed in hardware and some in software. It is the job of the System specification to define the full functionality and like the performance requirements, to set in motion the trade-offs and preliminary design studies to allocate these functions to the different disciplines (mechanical, electrical, software).

Another function of the System specification is to specify performance. For example, if the System is required to move a mechanism to a particular position accurate to a repeatability of ± 1 millimeter, that is a System’s requirement. Some portion of that repeatability specification will belong to the mechanical hardware, some to the servo amplifier and electronics and some to the software. It is the job of the System specification to provide that requirement and to set in motion the partitioning between mechanical hardware, electronics, and software. Very often the System specification will leave this partitioning until later when you learn more about the system and certain factors are traded off (For example, if we do this in software we would need to run the processor clock at 40 MHz. However, if we did this function in hardware, we could run the processor clock at 12 MHz). [This implies that a certain level of research or even prototyping and benchmarking needs to be done to create a System spec. I think it is useful to say that explicitly.]

However, for all practical purposes, most of the systems we are involved with in small to medium size companies combine the software and the systems documents. This is done primarily because most of the complexity is in the software. When hardware is used to meet a functional requirement, it is often something the software team wants well documented. Very often, the software is called upon to meet the system requirement with the hardware you have. Very often, there is not a systems department to drive the project and the software engineers become the systems engineers. For small projects, this is workable even if not ideal. In this case, the specification should make clear which requirements are software, which are hardware, and which are mechanical.

What is the difference between a design requirement and software requirement?

In short, the SRS should not include any design requirements. However, this is a difficult discipline. For example, because of the partitioning and the particular RTOS you are using, and the particular hardware you are using, you may require that no task use more than 1 ms of processing prior to releasing control back to the RTOS. Although that may be a true requirement and it involves software and should be tested – it is truly a design requirement and should be included in the Software Design Document or in the Source code.
Consider the target audience for each specification to identify what goes into what documents.

Marketing/Product Management
Creates a product specification and gives it to Systems. It should define everything Systems needs to specify the product.

Systems

Creates a System Specification and gives it to Systems/Software and Mechanical and Electrical Design.

Systems/Software

Creates a Software Specification and gives it to Software. It should define everything Software needs to develop the software.
Thus, the SRS should define everything explicitly or (preferably) by reference that software needs to develop the software. References should include the version number of the target document. Also, consider using master document tools which allow you to include other documents and easily access the full requirements.

Is this do-able? Won’t we miss our deadlines if we take the time to do this?

This is a great question. There is no question that there is balance in this process. We have seen companies and individuals go overboard on documenting software that doesn’t need to be documented, such as a temporary utility. We have also seen customers kill good products by spending too much time specifying them.

However, the bigger problem is at the other end of the spectrum. We have found that taking the time up front pays dividends down stream. If you don’t have time to specify it up front, you probably don’t have the time to do the project.

Here are some of our guidelines:

  • Spend time specifying and documenting well software that you plan to keep.
  • Keep documentation to a minimum when the software will only be used for a short time or has a limited number of users.
  • Have separate individuals write the specifications (not the individual who will write the code).
  • The person to write the specification should have good communication skills.
  • Pretty diagrams can help but often tables and charts are easier to maintain and can communicate the same requirements.
  • Take your time with complicated requirements. Vagueness in those areas will come back to bite you later.
  • Conversely, watch out for over-documenting those functions that are well understood by many people but for which you can create some great requirements.
  • Keep the SRS up to date as you make changes.
  • Approximately 20-25% of the project time should be allocated to requirements definition.
  • Keep 5% of the project time for updating the requirements after the design has begun.
  • Test the requirements document by using it as the basis for writing the test plan.

Monday, November 12, 2007

Planning PHP Projects

Planning PHP Projects

The biggest problem I see in most PHP scripts is a lack of structure and organization, and at the root of it is a lack of planning. Many PHP programmers don't think through projects before they dive into the coding. This is a tutorial I wrote 3-4 months ago. I have learned a lot since then, especially concerning OOP techniques. I am currently working on a new version; however, I considered it complete enough to be posted here. Please comment so that the next revision of it will improve!

Planning?

When coding a large, or even moderate or small size, project the most important thing is planning. When you get to the level in which you will, without a tutorial or book, be programming something on your own you know you can do the coding. You know you can connect to databases, print things to the screen, create basic classes, etc. However, what many people do when they reach this stage is skip the planning process and dive right into the coding procedure.
While improvisation during programming is good, if you want a streamlined application that shows consistent programming practices, is easy to update, and is simple to code, the best tool is a plan. If you have a database structure, if you have outlined the structure of your code, if you know exactly what you want your application to do, programming is a breeze. Successful planning is not treated in most books or tutorials, because in books and tutorials the author does the planning for you. But once you are creating an application on your own, you must plan and look ahead.
Hopefully, this tutorial will help you understand how to plan an application. It will be, eventually, a two or three part series that will address planning features, planning classes, database structure, templating, message abstraction, and hopefully some basic coding technique. It is not intended to teach you how to code PHP, instead it is intended to show you how to structure and plan your code well.
Enjoy! I have utilized these practices while creating programs, and when it comes to adding features and creating new versions, planning and good code structure is the most important part of the process.

Planning the Program Features

The basic need is to understand what your program will be doing. The example I will use throughout is a basic guestbook application. Initially, what you want to do is figure out the basic features that will contain everything else. For a guestbook application, I would probably outline something like this:
1.) The ability to view entries
2.) The ability to post entries
3.) Administration
4.) Templating
How did I arrive at these features? I looked at other guestbooks and saw what they had and tried to categorize it. I then looked to see what they didn't have that I would like and added that in and categorized it. The whole idea is just to have the roughest outline of the most basic features.
Now we can try to flesh it out. Brainstorm what you think a guestbook (or whatever your application is) should have. I figured out something like this:

  1. Viewing entries
    1. Default view is 10 entries at a time, but allow people to view more
    2. Maybe JavaScript expanding and collapsing of entries?
    3. Convert BBCode to HTML
  2. Posting entries
    1. Place a link on every page to the posting page
    2. Required fields: name, entry
    3. Optional fields: e-mail, website, IM usernames, location
    4. Hidden fields: IP address (for banning spammers)
  3. Administration
    1. Editing posts.
    2. Deleting posts.
    3. Banning people by IP (if someone consistently spams your guestbook)
    4. Bad word filter settings?
    5. Basic settings -- site name, guestbook name, site URL, database user/pass/database/host
    6. Admin response to entries
    7. Easy modification of templates and addition of new ones through the Admin panel
    8. Login and authentication, obviously
  4. Templating
    1. Basic replacing of {VARS} with values from DB or flatfile (a minimal sketch of this follows the list)
    2. If you don't want to code it yourself, systems such as Smarty and patTemplate are excellent and easy to use
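To show what that {VARS} replacement amounts to, here is a minimal sketch; the template file name, placeholder names, and values are purely illustrative:

$template = file_get_contents( "entry.tpl" );   // e.g. "<p>{NAME} wrote: {ENTRY}</p>"
$values   = array( "{NAME}" => "Alice", "{ENTRY}" => "Hello!" );
echo str_replace( array_keys( $values ), array_values( $values ), $template );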

This is by no means complete, but can you see that once a list such as this is available, programming is made that much easier? When I begin to write the script, I can see exactly where I want to go. I know how the various features will connect, and I understand the purpose behind what I am doing.

Planning the Coding

Here is where you decide how the backend/coding will work. Do you want to run it off a database? If so, which one (MySQL, SQLite, PostgreSQL, MS Access, etc.)? Or do you want to use flat files, and if so, how? And in general, what will the structure of your code be? My advice for large applications can be summed up in three words: Object Oriented Programming.
The best way to program a large PHP program -- and indeed any large program (and in fact many small ones) -- is to use OOP. Think of it this way. Say you have a forum script and it is OOP. You have a class named View. Inside that, you have a method called viewThreads() that accepts a few different values to tell it which threads to view. Now, suppose you have a page titled viewforum.php and at one point you want to view certain threads. So you call $view->viewThreads();. You also have a search function, and in it you want to view certain threads. Well, you call the same method. Now say you want to change the programming for viewing threads. If you didn't have OOP, you'd have to open up every file that views threads and change it. With OOP, you just change the class. You could just write a bunch of functions, but the advantage of OOP is that a class is more organized, can do something automatically every time it is instantiated, and can be easily extended. (Not to mention the more advanced object passing and handling features not introduced until PHP 5.)
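To make that reuse concrete, here is a minimal sketch of the idea (the class, method, and parameter names are only illustrative, not code from an actual forum):

class View {

    function viewThreads( $forumId, $limit ) {
        // Fetch and display the threads for one forum.
        // Every page that needs a thread listing calls this one method,
        // so a change here updates viewforum.php, the search page, and
        // anything else that lists threads.
    }
}

// viewforum.php and the search page both reuse the same code path:
$view = new View();
$view->viewThreads( 1, 20 );
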
So, you've decided to use classes. Or rather, I've decided that you will at least for this tutorial. What classes will you have, and what will each do? If you are going to use a database you might want a database class to handle connections. Then you'll want an entry class that handles viewing and adding entries. An administration class to handle administration, and a templating class. I would recommend, for a large project, to use a templating class like Smarty or patTemplate. These will save time and add powerful features you may not want to spend time programming yourself. The point is that you are coding a guestbook, not a template system. So use a template system, and get on with the guestbook. (Or forum, or CMS, or whatever.)
After you've decided on your classes, a good idea is to outline the classes and the functions in them. I'll give you an example here:
PHP Example:

class Entry {

    function Entry() {
        // Constructor: includes and initializes a global variable containing
        // the database class. Because this method has the same name as the
        // class, it runs automatically when the class is instantiated (PHP 4 style).
    }

    function view( $num, $start ) {
        // Queries the database and returns the number of entries specified in
        // $num, starting at the entry with the ID of $start.
    }

    function post( $name, $email, $website, $aim, $yim, $msn, $icq, $title, $post ) {
        // Takes the information passed in and adds it to the MySQL database.
    }

}

Obviously, more functions are needed for the class to work, and the coding is yet to be done, but the basic structure is there. Do this for all your classes and the work will be greatly simplified.
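As an illustration of how one of these stubs might eventually be filled in, here is a hedged sketch of view() using the mysql_* functions common in the PHP 4/5 era; the entries table and its columns are hypothetical, not part of the tutorial:

function view( $num, $start ) {
    // Fetch $num entries from a hypothetical `entries` table, beginning at ID $start.
    $result  = mysql_query("SELECT * FROM entries WHERE id >= " . (int) $start .
                           " ORDER BY id ASC LIMIT " . (int) $num);
    $entries = array();
    while ( $row = mysql_fetch_assoc( $result ) ) {
        $entries[] = $row;
    }
    return $entries;
}
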
I haven't done everything necessary for planning -- there are still classes to plan out and stuff like that -- but this is an example of how it should be done. For your CMS or forum, or -- indeed -- guestbook, using this type of planning and structure will make your life easier, your coding better, and is the way to go if you are coding even a relatively smallish project.

Coding the Program -- Methods to make Coding Easier

Templating

It is extremely important to start out with a template and templating system. Everything you code will be built around this, so don't leave it to the end. I would recommend using the Smarty templating system, but if that fails go for patTemplate (another good system). Whatever you choose, or if you code it yourself, get it working before anything else. Because the template system is the main thing the end user will see and use, you need it to work. I won't say much about the structure here, but one thing I would recommend doing is creating a template management class. Not a templating class that does the actual work, but one that manages it. So, say you are using Smarty, you would do something like:

PHP Example:

class Template {

    var $smarty;

    function Template() {
        require_once("Smarty.php");
        // Keep the Smarty object on the class so other methods can use it.
        $this->smarty = new Smarty();
    }

    function showTpl( $tpl ) {
        $this->smarty->display( $tpl );
    }
}

You could do more, but even just this means that you can easily include the beginning and end of the Smarty code in every page without having to write it over and over.
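For example, a page that uses the class would then only need something like this (the file names are placeholders):

include "template.php";        // wherever the Template class is defined

$tpl = new Template();
$tpl->showTpl( "index.tpl" );  // hypothetical Smarty template file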

Abstraction

What's abstraction? Well, it could be used as in "database abstraction", which means that this is a script that lets you use tons of different database utilities (MySQL, SQLite, PostgreSQL, etc.) without changing your code. But it also means something else. It means easy coding -- or at least easier. Look at it this way -- suppose you want something all over your pages to be the same. Well, either you can just write everything exactly the same over and over, or you can create another page that has the values you need and can be included into each page. The basic - or easiest - method of doing this is seen in sites that have several pages -- navbar.php, header.php, footer.php, etc. -- and include those pages into one that has the content. This type of thing is rendered obsolete by use of Smarty -- but you can go further -- especially for a large program.
One way is to create a messages class. "What messages?" you say. Well, whenever you do a program at some point you'll want to show a message -- whether it's "Thank you, your guestbook entry has been added to the database" or "Sorry, wrong username" -- and having a file like this will make it easier to have the same messages appear between scripts. An example script of this type would be:
PHP Example:


class msg {

    function msg( $num ) {
        $start   = '<p style="color: red">';
        $end     = "</p>";
        $message = $start;

        switch ( $num ) {
            case 1:
                $message .= "Wrong username";
                break;

            case 2:
                $message .= "Wrong pass";
                break;
        }

        $message .= $end;
        echo $message;
    }
}

If you have a login page, you would access it as follows:
PHP Example:

include "msg.php";
if( $pass != "arr" ) {
$msg = new msg( "2" );
}
elseif( $user != "user" ) {
$msg = new msg( "1" );
}

Obviously, that is not the correct way to create a login system, but you see how the message system could help. I included it as an example of a type of system that makes coding easier. So wherever you have a login, or somewhere where someone has to supply a username or password, just use the class and if you want to change the wording everywhere you have just one file to edit!
Not only that, if you need a new message, just add a new case inside the class, and then call it as you would the others. No need for scattered echo statements and markup. If you are using a template system you can't put the markup for the message inside the class as I did, but if your template system supports it you can create a hidden sub-template that gets shown when the message class is called. With Smarty, only about three more lines of code in the message class are needed for this.
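A hedged sketch of that last idea (assuming the page keeps its Template object in $template, and that the page template contains a hidden block such as {if $error_msg}<p>{$error_msg}</p>{/if}):

// Inside msg(), instead of echoing, hand the finished $message to Smarty
// and let the template decide whether to show its hidden block.
global $template;                                    // the page's Template object (assumed)
$template->smarty->assign( "error_msg", $message );
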
Now do you start to see what I mean by abstraction?
The only other important thing to cover in this is structure. Indent everything. Add a tab for each function in a class, and a tab for all the code inside that function, loop, or if statement.
And. . . COMMENT. Comment everything. You may think you'll know what you meant by a certain bit of code, but after a couple of months when the time comes to update. . .you'll be happier if you commented everything.

System!

If you look at this tutorial, you'll see my biggest point. System makes coding easier. If you have a system, anything can work, and probably will work much more easily. Debugging is easier if you have different classes, and if you use templating systems along with classes your actual code will look cleaner and will update more easily.
Establish ways that you do certain things, and do things that way in every piece of code. Ever since I started thinking about this and programming using classes and tools like this, my coding has taken huge leaps and bounds. Coding sloppily, on the other hand, may take less time initially (cutting out the planning and classes and suchlike), but when you go back to add new features, change a message, or change the template, you will pay for it.
If you are interested, try taking a simple application, and if you want to, try a bit of coding within the OOP environment using Smarty and a message class that employs Smarty's hidden/shown features. Just try out what I've outlined and see what you think. I practically guarantee that it will be much simpler after the planning stage to start programming, and that using classes for everything will make your life tons easier. If you don't know how to program OOP, check out a few tutorials (I will be writing one soon myself). It is worth the time.
There's obviously more to it than this; however, this tutorial covers some of the basic methods of planning. If you want to know how to design a database, look at the LAMP tutorial on this website. You can even find a tutorial here on message abstraction. However you do it, planning and system help. Hope this tutorial helped you.

Windows XP Multiuser Remote Desktop

An interesting feature of Windows XP is the ability to be remote controlled from a second PC: the so-called “Remote Desktop Connection” can be used over a dial-up connection or on a local Ethernet network.

XP (and Media Center Edition), unlike the Server versions of Windows, has a limit: a single PC can be used by a single “local” user (the “real” person at the machine) OR a single “remote” user. If someone logs into the computer remotely, the local user is disconnected. The following procedure removes this block and allows multiple people to connect to and use a single computer remotely.

Very useful, for example, if you have a powerful PC and you want your wife/friend/brother to use an old computer as a “terminal” for applications running on the new one, at the same time as you. Another application of the same technique: you’re at work and you want to connect to your home PC without disconnecting your wife, who is using the same computer to check email.

UPDATE: it seems that XP is still limited, even after this modification, to 3 concurrent users. So don’t waste time trying to raise the maximum number of connections above three (see step 5); at this time, I don’t think there’s a way to use the same XP PC with more than 3 people at the same time (e.g. a local user and 2 remote users).

This procedure is a “hack”: do it at your own risk.


STEP 1

  • Start Windows in Safe Mode (tap F8 before the Windows loading splash screen appears);
  • right-click “My Computer” and choose “Properties”;
  • go to the “Remote” tab and uncheck “Allow users to connect remotely to this computer” (if it’s already unchecked, just do nothing);
  • click OK.
STEP 2
  • Go to Start -> Control Panel;
  • open “Administrative Tools” and then “Services”;
  • double click “Terminal Services”, in the list;
  • choose “Disabled” for “Startup Type” option;
  • click OK.
STEP 3
  • Go to C:\windows\system32\dllcache;
  • rename the termsrv.dll file to termsrv.original or another name you like;
  • copy this unrestricted old version of termsrv.dll into the folder;
  • go to C:\windows\system32 (the parent folder of the current one);
  • do the same operation there: rename termsrv.dll here as well, and put another copy of the file I linked above in its place.
STEP 4
  • Click Start, then “Run…”, type “regedit” (without quotes) and press ENTER;
  • navigate in the Windows Registry Tree to reach this path:
  • HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\Licensing Core;
  • right-click a blank space in the right pane of the registry window, choose “New” > “DWORD Value”, name the new value “EnableConcurrentSessions” (without quotes), then edit it and set its value to 1;
  • close the editor.
STEP 5
  • Click Start, then “Run…”, type “gpedit.msc” (without quotes) and press ENTER;
  • open Computer Configuration > Administrative Templates > Windows Components > Terminal Services;
  • double click “Limit number of connections”, choose “Enabled” and set the maximum number of concurrent connections you want to allow (2 or more), then Restart Windows in normal mode.
STEP 6
  • Go back to the Remote tab of My Computer’s properties (see step 1) and activate “Allow users to connect remotely to this computer”;
  • go back to “Terminal Services” in “Services” (see step 2) and set its “Startup type” to “Manual”;
  • now restart Windows. Your operating system should be ready to accept multiple remote desktop connections.
  • Remember that you have to create a separate Windows user account for every “physical” user who wants to connect to your desktop, so each can authenticate with their own login/password.
  • User account configuration is reachable in the Control Panel, and the list of users allowed to connect to the PC is editable in the Remote tab of My Computer’s properties.

Thursday, November 8, 2007

PHP 5.2.5 and MySQL 5.1

PHP 5.2.5 Released

On 08-Nov-2007 in php.net

The PHP development team would like to announce the immediate availability of PHP 5.2.5. This release focuses on improving the stability of the PHP 5.2.x branch with over 60 bug fixes, several of which are security related. All users of PHP are encouraged to upgrade to this release.

Further details about the PHP 5.2.5 release can be found in the release announcement for 5.2.5, the full list of changes is available in the ChangeLog for PHP 5.

Complete Source Code

PHP 5.2.5 (tar.bz2) [7,591Kb] - 08 November 2007 - md5: 1fe14ca892460b09f06729941a1bb605
PHP 5.2.5 (tar.gz) [9,739Kb] - 08 November 2007 - md5: 61a0e1661b70760acc77bc4841900b7a

Windows Binaries

PHP 5.2.5 zip package [9,713Kb] - 08 November 2007 - md5: a1e31c0d872ab030a2256b1cd6d3b7d1
PHP 5.2.5 installer [19,803Kb] - 15 November 2007 - md5: f9396b654721d9a18c95ea6412c3d54e

Note: Updated due to problems with the original installer for this release.

PECL 5.2.5 Win32 binaries [2,879Kb] - 08 November 2007 - md5: a3553b61c9332d08a5044cf9bf89f2df
PHP 5.2.5 Non-thread-safe Win32 binaries [9,619Kb] - 08 November 2007 - md5: 41ef1582f43cfdb6e546a626b9ef93d6
PECL 5.2.5 Non-thread-safe Win32 binaries [4,114Kb] - 08 November 2007 - md5: 6e5ac694907b4aae080b2c9b6e83748a

Note: Most of these PECL extension files came standard with the PHP 4 Windows binaries, but have since been moved into this separate PECL download (files such as php_pdf.dll, php_ssh2.dll, etc.).

MySQL 5.1 Downloads

MySQL Community Edition

MySQL Community Edition is a freely downloadable version of the world's most popular open source database that is supported by an active community of open source developers and enthusiasts.

MySQL 5.1 Community Edition - Release Candidate (Development Release)

Click Here to download MySQL 5.1 Community Edition

NOTE: This Release Candidate, like any other pre-production release, should not be installed on production-level systems or systems with critical data. It is good practice to back up your data before installing any new version of software. Although MySQL has worked very hard to ensure a high level of quality, protect your data by making a backup as you would for any other pre-release software. MySQL generally recommends that you dump and reload your tables from any previous version when upgrading to 5.1.

Thursday, November 1, 2007

What Is Web 2.0

Design Patterns and Business Models for the Next Generation of Software

The bursting of the dot-com bubble in the fall of 2001 marked a turning point for the web. Many people concluded that the web was overhyped, when in fact bubbles and consequent shakeouts appear to be a common feature of all technological revolutions. Shakeouts typically mark the point at which an ascendant technology is ready to take its place at center stage. The pretenders are given the bum's rush, the real success stories show their strength, and there begins to be an understanding of what separates one from the other.
The concept of "Web 2.0" began with a conference brainstorming session between O'Reilly and MediaLive International. Dale Dougherty, web pioneer and O'Reilly VP, noted that far from having "crashed", the web was more important than ever, with exciting new applications and sites popping up with surprising regularity. What's more, the companies that had survived the collapse seemed to have some things in common. Could it be that the dot-com collapse marked some kind of turning point for the web, such that a call to action such as "Web 2.0" might make sense? We agreed that it did, and so the Web 2.0 Conference was born.
In the year and a half since, the term "Web 2.0" has clearly taken hold, with more than 9.5 million citations in Google. But there's still a huge amount of disagreement about just what Web 2.0 means, with some people decrying it as a meaningless marketing buzzword, and others accepting it as the new conventional wisdom.
This article is an attempt to clarify just what we mean by Web 2.0.
In our initial brainstorming, we formulated our sense of Web 2.0 by example:
Web 1.0 → Web 2.0

  • DoubleClick → Google AdSense
  • Ofoto → Flickr
  • Akamai → BitTorrent
  • mp3.com → Napster
  • Britannica Online → Wikipedia
  • personal websites → blogging
  • evite → upcoming.org and EVDB
  • domain name speculation → search engine optimization
  • page views → cost per click
  • screen scraping → web services
  • publishing → participation
  • content management systems → wikis
  • directories (taxonomy) → tagging ("folksonomy")
  • stickiness → syndication
The list went on and on. But what was it that made us identify one application or approach as "Web 1.0" and another as "Web 2.0"? (The question is particularly urgent because the Web 2.0 meme has become so widespread that companies are now pasting it on as a marketing buzzword, with no real understanding of just what it means. The question is particularly difficult because many of those buzzword-addicted startups are definitely not Web 2.0, while some of the applications we identified as Web 2.0, like Napster and BitTorrent, are not even properly web applications!) We began trying to tease out the principles that are demonstrated in one way or another by the success stories of web 1.0 and by the most interesting of the new applications.

1. The Web As Platform

Like many important concepts, Web 2.0 doesn't have a hard boundary, but rather, a gravitational core. You can visualize Web 2.0 as a set of principles and practices that tie together a veritable solar system of sites that demonstrate some or all of those principles, at a varying distance from that core.

[Figure 1: Web 2.0 Meme Map]

Figure 1 shows a "meme map" of Web 2.0 that was developed at a brainstorming session during FOO Camp, a conference at O'Reilly Media. It's very much a work in progress, but shows the many ideas that radiate out from the Web 2.0 core.
For example, at the first Web 2.0 conference, in October 2004, John Battelle and I listed a preliminary set of principles in our opening talk. The first of those principles was "The web as platform." Yet that was also a rallying cry of Web 1.0 darling Netscape, which went down in flames after a heated battle with Microsoft. What's more, two of our initial Web 1.0 exemplars, DoubleClick and Akamai, were both pioneers in treating the web as a platform. People don't often think of it as "web services", but in fact, ad serving was the first widely deployed web service, and the first widely deployed "mashup" (to use another term that has gained currency of late). Every banner ad is served as a seamless cooperation between two websites, delivering an integrated page to a reader on yet another computer. Akamai also treats the network as the platform, and at a deeper level of the stack, building a transparent caching and content delivery network that eases bandwidth congestion.
Nonetheless, these pioneers provided useful contrasts because later entrants have taken their solution to the same problem even further, understanding something deeper about the nature of the new platform. Both DoubleClick and Akamai were Web 2.0 pioneers, yet we can also see how it's possible to realize more of the possibilities by embracing additional Web 2.0 design patterns.
Let's drill down for a moment into each of these three cases, teasing out some of the essential elements of difference.

Netscape vs. Google

If Netscape was the standard bearer for Web 1.0, Google is most certainly the standard bearer for Web 2.0, if only because their respective IPOs were defining events for each era. So let's start with a comparison of these two companies and their positioning.
Netscape framed "the web as platform" in terms of the old software paradigm: their flagship product was the web browser, a desktop application, and their strategy was to use their dominance in the browser market to establish a market for high-priced server products. Control over standards for displaying content and applications in the browser would, in theory, give Netscape the kind of market power enjoyed by Microsoft in the PC market. Much like the "horseless carriage" framed the automobile as an extension of the familiar, Netscape promoted a "webtop" to replace the desktop, and planned to populate that webtop with information updates and applets pushed to the webtop by information providers who would purchase Netscape servers.
In the end, both web browsers and web servers turned out to be commodities, and value moved "up the stack" to services delivered over the web platform.
Google, by contrast, began its life as a native web application, never sold or packaged, but delivered as a service, with customers paying, directly or indirectly, for the use of that service. None of the trappings of the old software industry are present. No scheduled software releases, just continuous improvement. No licensing or sale, just usage. No porting to different platforms so that customers can run the software on their own equipment, just a massively scalable collection of commodity PCs running open source operating systems plus homegrown applications and utilities that no one outside the company ever gets to see.
At bottom, Google requires a competency that Netscape never needed: database management. Google isn't just a collection of software tools, it's a specialized database. Without the data, the tools are useless; without the software, the data is unmanageable. Software licensing and control over APIs--the lever of power in the previous era--is irrelevant because the software never need be distributed but only performed, and also because without the ability to collect and manage the data, the software is of little use. In fact, the value of the software is proportional to the scale and dynamism of the data it helps to manage.
Google's service is not a server--though it is delivered by a massive collection of internet servers--nor a browser--though it is experienced by the user within the browser. Nor does its flagship search service even host the content that it enables users to find. Much like a phone call, which happens not just on the phones at either end of the call, but on the network in between, Google happens in the space between browser and search engine and destination content server, as an enabler or middleman between the user and his or her online experience.
While both Netscape and Google could be described as software companies, it's clear that Netscape belonged to the same software world as Lotus, Microsoft, Oracle, SAP, and other companies that got their start in the 1980's software revolution, while Google's fellows are other internet applications like eBay, Amazon, Napster, and yes, DoubleClick and Akamai.

Wednesday, October 17, 2007

Project management

From Wikipedia, the free encyclopedia

Project management is the discipline of organizing and managing resources (e.g. people) in such a way that the project is completed within defined scope, quality, time and cost constraints. A project is a temporary and one-time endeavor undertaken to create a unique product or service, which brings about beneficial change or added value. This property of being a temporary and one-time undertaking contrasts with processes, or operations, which are permanent or semi-permanent ongoing functional work to create the same product or service over and over again. The management of these two systems is often very different and requires varying technical skills and philosophy, hence the development of project management as a discipline.

The first challenge of project management is to make sure that a project is delivered within defined constraints. The second, more ambitious challenge is the optimized allocation and integration of inputs needed to meet pre-defined objectives. A project is a carefully defined set of activities that use resources (money, people, materials, energy, space, provisions, communication, etc.) to meet the pre-defined objectives.

History of project management

As a discipline, project management developed from different fields of application including construction, engineering, and defense. In the United States, the forefather of project management is Henry Gantt, called the father of planning and control techniques, who is famously known for his use of the "Gantt" chart as a project management tool, for being an associate of Frederick Winslow Taylor's theories of scientific management[1], and for his study of the work and management of Navy ship building. His work is the forerunner to many modern project management tools including the work breakdown structure (WBS) and resource allocation.

The 1950s marked the beginning of the modern project management era. Again, in the United States, prior to the 1950s, projects were managed on an ad hoc basis using mostly Gantt Charts, and informal techniques and tools. At that time, two mathematical project scheduling models were developed: (1) the "Program Evaluation and Review Technique" or PERT, developed by Booz-Allen & Hamilton as part of the United States Navy's (in conjunction with the Lockheed Corporation) Polaris missile submarine program[2]; and (2) the "Critical Path Method" (CPM) developed in a joint venture by both DuPont Corporation and Remington Rand Corporation for managing plant maintenance projects. These mathematical techniques quickly spread into many private enterprises.

At the same time, technology for project cost estimating, cost management, and engineering economics was evolving, with pioneering work by Hans Lang and others. In 1956, the American Association of Cost Engineers (now AACE International; the Association for the Advancement of Cost Engineering) was formed by early practitioners of project management and the associated specialties of planning and scheduling, cost estimating, and cost/schedule control (project control). AACE has continued its pioneering work and in 2006 released the first ever integrated process for portfolio, program and project management (Total Cost Management Framework).

In 1969, the Project Management Institute (PMI) was formed to serve the interest of the project management industry. The premise of PMI is that the tools and techniques of project management are common even among the widespread application of projects from the software industry to the construction industry. In 1981, the PMI Board of Directors authorized the development of what has become A Guide to the Project Management Body of Knowledge (PMBOK Guide), containing the standards and guidelines of practice that are widely used throughout the profession. The International Project Management Association (IPMA), founded in Europe in 1967, has undergone a similar development and instituted the IPMA Competence Baseline (ICB). The focus of the ICB also begins with knowledge as a foundation, and adds considerations about relevant experience, interpersonal skills, and competence. Both organizations are now participating in the development of an ISO project management standard.

Definitions

  • PMBOK (Project Management Body of Knowledge as defined by the Project Management Institute - PMI):"Project management is the application of knowledge, skills, tools and techniques to project activities to meet project requirements."[3]
  • PRINCE2 project management methodology: "The planning, monitoring and control of all aspects of the project and the motivation of all those involved in it to achieve the project objectives on time and to the specified cost, quality and performance."[4]
  • PROJECT: A temporary piece of work with a finite end date undertaken to create a unique product or service. Projects bring form or function to ideas or needs.
  • DIN 69901 (Deutsches Institut für Normung - German Organization for Standardization): "Project management is the complete set of tasks, techniques, tools applied during project execution"

Job description

Project management is quite often the province and responsibility of an individual project manager. This individual seldom participates directly in the activities that produce the end result, but rather strives to maintain the progress and productive mutual interaction of various parties in such a way that overall risk of failure is reduced.

A project manager is often a client representative and has to determine and implement the exact needs of the client, based on knowledge of the firm he/she is representing. The ability to adapt to the various internal procedures of the contracting party, and to form close links with the nominated representatives, is essential in ensuring that the key issues of cost, time, quality, and above all, client satisfaction, can be realized.

In whatever field, a successful project manager must be able to envision the entire project from start to finish and to have the ability to ensure that this vision is realized.

Any type of product or service —buildings, vehicles, electronics, computer software, financial services, etc.— may have its implementation overseen by a project manager and its operations by a product manager.

The traditional triple constraints

Like any human undertaking, projects need to be performed and delivered under certain constraints. Traditionally, these constraints have been listed as scope, time, and cost[citation needed]. These are also referred to as the Project Management Triangle, where each side represents a constraint. One side of the triangle cannot be changed without impacting the others. A further refinement of the constraints separates product 'quality' or 'performance' from scope, and turns quality into a fourth constraint.

[Figure: The Project Management Triangle]

The diagram shown here, often called the project management triangle, focuses on the trade-off between time, cost, and quality (see also Project triangle, where these aspects are called fast, cheap, and good). The variant claimed to be traditional (without citation) differs significantly in that quality, treated as a combination of the other aspects, is at best something like "process quality", and certainly not product quality. This diagram therefore represents a biased (and often inappropriate) view of what quality is, reflecting the self-assessment of project managers, who in many company cultures are often perceived as pursuing their own careers more than actual product or project quality.

The time constraint refers to the amount of time available to complete a project. The cost constraint refers to the budgeted amount available for the project. The scope constraint refers to what must be done to produce the project's end result. These three constraints are often competing constraints: increased scope typically means increased time and increased cost, a tight time constraint could mean increased costs and reduced scope, and a tight budget could mean increased time and reduced scope.

The discipline of project management is about providing the tools and techniques that enable the project team (not just the project manager) to organize their work to meet these constraints.

Another approach to project management is to consider the three constraints as finance, time and human resources. If you need to finish a job in a shorter time, you can throw more people at the problem, which in turn will raise the cost of the project, unless doing the task more quickly reduces costs elsewhere in the project by an equal amount.

Time

For analytical purposes, the time required to produce a deliverable is estimated using several techniques. One method is to identify tasks needed to produce the deliverables documented in a work breakdown structure or WBS. The work effort for each task is estimated and those estimates are rolled up into the final deliverable estimate.
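As a rough illustration of that rollup (the task breakdown and the hour figures below are entirely hypothetical), the calculation amounts to summing the task-level estimates under each deliverable; a minimal sketch in PHP, the language used elsewhere on this blog:

// Roll task-level estimates, documented in a WBS, up into deliverable-level estimates.
$wbs = array(
    "Deliverable A" => array( "design" => 16, "code" => 40, "test" => 24 ),  // hours
    "Deliverable B" => array( "design" => 8,  "code" => 20, "test" => 12 ),
);
foreach ( $wbs as $deliverable => $tasks ) {
    echo $deliverable . ": " . array_sum( $tasks ) . " hours\n";
}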

The tasks are also prioritized, dependencies between tasks are identified, and this information is documented in a project schedule. The dependencies between the tasks can affect the length of the overall project (dependency constrained), as can the availability of resources (resource constrained). Project managers will often make a call to double down to prevent a project from breaking deadlines in the final stages of the implementation phase. Time is considered neither a cost nor a resource, since the project manager cannot control the rate at which it is expended; this makes it different from all other resources and cost categories. It should be remembered that no effort expended will have any higher quality than that of the effort-expenders.

Cost

Cost to develop a project depends on several variables including (chiefly): resource quantities, labor rates, material rates, risk management (i.e. cost contingency), earned value management, plant (buildings, machines, etc.), equipment, cost escalation, indirect costs, and profit.

Scope

Requirements specified for the end result. The overall definition of what the project is supposed to accomplish, and a specific description of what the end result should be or accomplish. A major component of scope is the quality of the final product. The amount of time put into individual tasks determines the overall quality of the project. Some tasks may require a given amount of time to complete adequately, but given more time could be completed exceptionally. Over the course of a large project, quality can have a significant impact on time and cost (or vice versa).

Together, these three constraints have given rise to the phrase "On Time, On Spec, On Budget". In this case, the term "scope" is substituted with "spec(ification)".

Project management activities

Project management is composed of several different types of activities such as:

  1. Analysis & design of objectives and events
  2. Planning the work according to the objectives
  3. Assessing and controlling risk (or Risk Management)
  4. Estimating resources
  5. Allocation of resources
  6. Organizing the work
  7. Acquiring human and material resources
  8. Assigning tasks
  9. Directing activities
  10. Controlling project execution
  11. Tracking and reporting progress (Management information system)
  12. Analyzing the results based on the facts achieved
  13. Defining the products of the project
  14. Forecasting future trends in the project
  15. Quality Management
  16. Issues management
  17. Issue solving
  18. Defect prevention
  19. Identifying, managing & controlling changes
  20. Project closure (and project debrief)
  21. Communicating to stakeholders
  22. Increasing/ decreasing a company's workers

Project objectives

Project objectives define the target status at the end of the project, the reaching of which is considered necessary for the achievement of planned benefits. They can be formulated as S.M.A.R.T.:

  • Specific,
  • Measurable (or at least evaluable) achievement,
  • Achievable (recently Acceptable is used regularly as well),
  • Realistic and
  • Time terminated (bounded).

The evaluation (measurement) occurs at project closure. However, a continuous watch should be kept on project progress by monitoring and evaluating throughout.

Project management artifacts

Most successful projects have one thing that is very evident - they were adequately documented, with clear objectives and deliverables.[citation needed] These documents are a mechanism to align sponsors, clients, and project team's expectations.

  1. Project Charter
  2. Preliminary Scope Statement / Statement of work
  3. Business case/Feasibility Study
  4. Scope Statement / Terms of reference
  5. Project management plan / Project Initiation Document
  6. Work Breakdown Structure
  7. Change Control Plan
  8. Risk Management Plan
  9. Risk Breakdown Structure
  10. Communications Plan
  11. Governance Model
  12. Risk Register
  13. Issue Log
  14. Action Item List
  15. Resource Management Plan
  16. Project Schedule
  17. Status Report
  18. Responsibility assignment matrix
  19. Database of lessons learned
  20. Stakeholder Analysis

These documents are normally hosted on a shared resource (e.g., an intranet page) and are available for review by the project's stakeholders, except for the Stakeholder Analysis, which contains personal information about certain stakeholders and is therefore accessible only to the Project Manager. Changes or updates to these documents are explicitly outlined in the project's configuration management (or change control) plan.

Project control variables

Project Management tries to gain control over variables such as risk:

Risk
Potential points of failure: most negative risks (or potential failures) can be overcome or resolved, given enough planning capability, time, and resources. According to some definitions (including the PMBOK Third Edition), risk can also be categorized as "positive," meaning that it represents a potential opportunity, e.g., completing the project faster than expected.
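
One common way to make individual risks comparable, whether threats or opportunities, is expected monetary value (probability multiplied by cost impact). The risk-register entries below are hypothetical:

  <?php
  // Hypothetical risk-register sketch: rank risks by expected monetary value
  // (probability x cost impact). Negative impacts are threats; positive
  // impacts are opportunities, in the "positive risk" sense noted above.
  $risks = [
      ['name' => 'Key developer unavailable',       'probability' => 0.20, 'impact' => -15000],
      ['name' => 'Vendor delivers hardware late',   'probability' => 0.10, 'impact' =>  -8000],
      ['name' => 'Reusable module cuts build time', 'probability' => 0.30, 'impact' =>   5000],
  ];

  foreach ($risks as &$risk) {
      $risk['exposure'] = $risk['probability'] * $risk['impact'];
  }
  unset($risk);

  // Largest threats (most negative exposure) first.
  usort($risks, function ($a, $b) {
      return $a['exposure'] <=> $b['exposure'];
  });

  print_r($risks);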

Customers (either internal or external project sponsors) and external organizations (such as government agencies and regulators) can dictate the extent of three variables: time, cost, and scope. The remaining variable (risk) is managed by the project team, ideally based on solid estimation and response planning techniques. Through a negotiation process among project stakeholders, an agreement defines the final objectives, in terms of time, cost, scope, and risk, usually in the form of a charter or contract.

To properly control these variables a good project manager has a depth of knowledge and experience in these four areas (time, cost, scope, and risk), and in six other areas as well: integration, communication, human resources, quality assurance, schedule development, and procurement.

Approaches

There are several approaches that can be taken to managing project activities, including agile, iterative, incremental, and phased approaches.

Regardless of the approach employed, careful consideration must be given to clarifying the overall project objectives and goals and, importantly, the roles and responsibilities of all participants and stakeholders.

The traditional approach

A traditional phased approach identifies a sequence of steps to be completed. In the traditional approach, five components of a project can be distinguished (four stages plus control):

  1. project initiation stage;
  2. project planning or design stage;
  3. project execution or production stage;
  4. project monitoring and controlling systems;
  5. project completion stage.

Not all projects will visit every stage, as projects can be terminated before they reach completion. Some projects do not go through structured planning and/or monitoring. Some projects will go through stages 2, 3 and 4 multiple times.

Many industries utilize variations on these stages. For example, in bricks-and-mortar architectural design, projects typically progress through stages like Pre-Planning, Conceptual Design, Schematic Design, Design Development, Construction Drawings (or Contract Documents), and Construction Administration. In software development, this approach is often known as 'waterfall development', i.e. one series of tasks after another in linear sequence. Many software organizations have adapted the Rational Unified Process (RUP) to fit this methodology, although RUP does not require or explicitly recommend the practice. Waterfall development can work for small, tightly defined projects, but it is less suited to larger projects of undefined or unknowable scope. Because software development is often the realization of a new or novel product, this method is widely regarded as ineffective for software projects where requirements are largely unknowable up front and susceptible to change. While the names may differ from industry to industry, the actual stages typically follow the common steps of problem solving: defining the problem, weighing options, choosing a path, implementing, and evaluating.

Rational Unified Process

  1. Inception - Identify the initial scope of the project, a potential architecture for the system, and obtain initial project funding and stakeholder acceptance.
  2. Elaboration - Prove the architecture of the system.
  3. Construction - Build working software on a regular, incremental basis which meets the highest-priority needs of project stakeholders.
  4. Transition - Validate and deploy the system into the production environment.

Temporary organization sequencing concepts

  1. Action-based entrepreneurship
  2. Fragmentation for commitment-building
  3. Planned isolation
  4. Institutionalised termination

Critical Chain

Critical chain is the application of the Theory of Constraints (TOC) to projects. The goal is to increase the rate of throughput (or completion rates) of projects in an organization. Applying the first three of the five focusing steps of TOC, the system constraint for all projects is identified as resources. To exploit the constraint, tasks on the critical chain are given priority over all other activities. Finally, projects are planned and managed to ensure that the critical chain tasks are ready to start as soon as the needed resources are available, subordinating all other resources to the critical chain.

For specific projects, the project plan is resource-leveled, and the longest sequence of resource-constrained tasks is identified as the critical chain. In multi-project environments, resource leveling should be performed across projects. However, it is often enough to identify (or simply select) a single "drum" resource—a resource that acts as a constraint across projects—and stagger projects based on the availability of that single resource.
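
As a deliberately simplified, hypothetical sketch: if a single "drum" resource can only work on one task at a time, levelling that resource amounts to queuing its tasks one after another, and the resulting sequence behaves like the resource-constrained chain. Task names and durations below are assumptions for illustration:

  <?php
  // Hypothetical sketch of resource levelling on a single "drum" resource:
  // tasks assigned to the drum cannot overlap, so they are queued one after
  // another; the queue length gives the resource-constrained chain length.
  $drumTasks = [
      ['name' => 'Integrate module A', 'duration' => 6],
      ['name' => 'Integrate module B', 'duration' => 4],
      ['name' => 'Integrate module C', 'duration' => 5],
  ];

  $clock = 0;
  $schedule = [];
  foreach ($drumTasks as $task) {
      $schedule[] = [
          'name'   => $task['name'],
          'start'  => $clock,
          'finish' => $clock + $task['duration'],
      ];
      $clock += $task['duration'];   // the drum is busy until this point
  }

  print_r($schedule);
  printf("Resource-constrained chain length: %d days\n", $clock);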

Extreme Project Management

In critical studies of project management, it has been noted that several of these fundamentally PERT-based models are not well suited for the multi-project company environment of today. Most of them are aimed at very large-scale, one-time, non-routine projects, and nowadays all kinds of management are expressed in terms of projects. Using complex models for "projects" (or rather "tasks") spanning a few weeks has been proven to cause unnecessary costs and low maneuverability in several cases. Instead, project management experts try to identify different "lightweight" models, such as Extreme Programming for software development and Scrum techniques. The generalization of Extreme Programming to other kinds of projects is extreme project management, which may be used in combination with the process modeling and management principles of human interaction management.

Event chain methodology

Event chain methodology is the next advance beyond critical path method and critical chain project management.

Event chain methodology is an uncertainty modeling and schedule network analysis technique focused on identifying and managing the events and event chains that affect project schedules. It helps to mitigate the negative impact of psychological heuristics and biases, and allows for straightforward modeling of uncertainties in the project schedule. Event chain methodology is based on the following major principles (a small simulation sketch follows the list):

  • Probabilistic moment of risk: An activity (task) in most real life processes is not a continuous uniform process. Tasks are affected by external events, which can occur at some point in the middle of the task.
  • Event chains: Events can cause other events, which will create event chains. These event chains can significantly affect the course of the project. Quantitative analysis is used to determine a cumulative effect of these event chains on the project schedule.
  • Critical events or event chains: The single events or the event chains that have the most potential to affect the projects are the “critical events” or “critical chains of events.” They can be determined by the analysis.
  • Project tracking with events: If a project is partially completed and data about the project duration, cost, and events that have occurred is available, it is possible to refine information about future potential events and so to forecast future project performance.
  • Event chain visualization: Events and event chains can be visualized using event chain diagrams on a Gantt chart.
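
The "probabilistic moment of risk" principle lends itself to straightforward Monte Carlo simulation. The sketch below, with hypothetical figures, estimates the duration of a single task that one risk event may lengthen:

  <?php
  // Hypothetical Monte Carlo sketch for one task affected by one risk event:
  // the task nominally takes 10 days, but with 30% probability an event
  // occurs and adds a 5-day delay. Repeated sampling gives a distribution.
  function sampleDuration(float $base, float $eventProbability, float $eventDelay): float
  {
      $eventOccurs = (mt_rand() / mt_getrandmax()) < $eventProbability;
      return $base + ($eventOccurs ? $eventDelay : 0.0);
  }

  $trials  = 10000;
  $samples = [];
  for ($i = 0; $i < $trials; $i++) {
      $samples[] = sampleDuration(10.0, 0.3, 5.0);
  }

  sort($samples);
  printf("Mean duration:   %.2f days\n", array_sum($samples) / $trials);
  printf("80th percentile: %.2f days\n", $samples[(int) floor(0.8 * ($trials - 1))]);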

Process-based management

Also furthering the concept of project control is the incorporation of process-based management. This area has been driven by the use of maturity models such as the CMMI (Capability Maturity Model Integration) and ISO/IEC 15504 (SPICE, Software Process Improvement and Capability Determination), which have proved considerably more successful in practice.

Agile project management approaches, based on the principles of human interaction management, are founded on a process view of human collaboration. This contrasts sharply with the traditional approach. In the agile software development or flexible product development approach, the project is seen as a series of relatively small tasks conceived and executed as the situation demands, in an adaptive manner, rather than as a completely pre-planned process.

Project systems

As mentioned above, traditionally, project development includes five elements: control systems and four stages.

Project control systems

Project control is the element of a project that keeps it on track, on time, and within budget. Project control begins early in the project with planning and ends late in the project with post-implementation review, with thorough involvement at each step of the process. Each project should be assessed for the appropriate level of control needed: too much control is too time-consuming; too little control is too costly. If control is not implemented correctly, the cost to the business should be clarified in terms of errors, fixes, and additional audit fees. The practices of project control are part of the field of cost engineering.

Control systems are needed for cost, risk, quality, communication, time, change, procurement, and human resources. In addition, auditors should consider how important the projects are to the financial statements, how reliant the stakeholders are on controls, and how many controls exist. Auditors should review the development process and procedures for how they are implemented. The process of development and the quality of the final product may also be assessed if needed or requested. A business may want the auditing firm to be involved throughout the process to catch problems earlier on so that they can be fixed more easily. An auditor can serve as a controls consultant as part of the development team or as an independent auditor as part of an audit.

Businesses sometimes use formal systems development processes. These help ensure that systems are developed successfully. A formal process is more effective in creating strong controls, and auditors should review this process to confirm that it is well designed and is followed in practice. A good formal systems development plan outlines:

  • A strategy to align development with the organization’s broader objectives
  • Standards for new systems
  • Project management policies for timing and budgeting
  • Procedures describing the process

Project development stages

Regardless of the methodology used, the project development process will have the same major stages: initiation, development, production or execution, and closing/maintenance.

Initiation

The initiation stage determines the nature and scope of the development. If this stage is not performed well, it is unlikely that the project will be successful in meeting the business’s needs. The key project controls needed here are an understanding of the business environment and making sure that all necessary controls are incorporated into the project. Any deficiencies should be reported and a recommendation should be made to fix them.

The initiation stage should include a cohesive plan that encompasses the following areas:

  • A study analyzing the business needs in measurable goals.
  • A review of the current operations.
  • A conceptual design of the operation of the final product.
  • Equipment requirements.
  • A financial analysis of the costs and benefits, including a budget.
  • Selection of stakeholders, including users and support personnel for the project.
  • A project charter including costs, tasks, deliverables, and schedule.

Planning and design

After the initiation stage, the system is designed. Occasionally, a small prototype of the final product is built and tested. Testing is generally performed by a combination of testers and end users, and can occur after the prototype is built or concurrently. Controls should be in place to ensure that the final product meets the specifications of the project charter. The results of the design stage should include a product design that:

  • Satisfies the project sponsor, end user, and business requirements.
  • Functions as it was intended.
  • Can be produced within quality standards.
  • Can be produced within time and budget constraints.

Closing and maintenance

Closing includes the formal acceptance of the project and the ending thereof. Administrative activities include the archiving of the files and documenting lessons learned.

Maintenance is an ongoing process, and it includes:

  • Continuing support of end users
  • Correction of errors
  • Updates of the software over time

In this stage, auditors should pay attention to how effectively and quickly user problems are resolved.

Over the course of any construction project, the work scope changes. Change is a normal and expected part of the construction process. Changes can be the result of necessary design modifications, differing site conditions, material availability, contractor-requested changes, value engineering, and impacts from third parties, to name a few. Beyond executing the change in the field, the change normally needs to be documented to show what was actually constructed. Hence, the owner usually requires a final record showing all changes or, more specifically, any change that modifies the tangible portions of the finished work. The record is made on the contract documents – usually, but not necessarily limited to, the design drawings. The end product of this effort is what the industry terms as-built drawings, or more simply, "as-builts." The requirement for providing them is a norm in construction contracts.

Project management tools

Project management tools include:

Project management associations

Several national and professional associations exist which have as their aim the promotion and development of project management and the project management profession. The most prominent associations include:

International standards

There have been several attempts to develop project management standards, such as:

Professional certifications