Wednesday, August 13, 2008

Recursive SQL User Defined Functions

In this article we will cover a feature introduced with SQL Server 2000: the ability to create user-defined functions. We will focus on creating recursive queries, extending upon previous articles.
Being the developer of this directory, I had a need to perform several recursive operations. Any time you are creating a tree-like data structure, such as directories or org charts, you will need to perform some kind of recursion. In this example, I wanted to provide a count of all descendant sites under a particular category, as you see on WWWCoder.com when navigating the site. Each category displays a count of the sites in all categories beneath it.
In the past there were several ways to accomplish this. One simple method was to create a field in the categories table that would be incremented each time a site was added. This was accomplished using a recursive method in the ASP.NET code that created a new connection to the database each time a record was added. It did eliminate the need to call a recursive function every time a visitor requested a page displaying the category navigation. The method accepted a category ID and a string value indicating whether to increment or decrement the site count field:

       
Public Sub UpdateParentCount(ByVal ParentID As Integer, ByVal AddSubtract As String)
 'Build the UPDATE that increments or decrements the cached site count.
 Dim SQLQ As String = "UPDATE Categories SET SiteCount = SiteCount " & AddSubtract & " 1"
 If AddSubtract = "+" Then 'a site was added, so record the date for the "new" icon.
  SQLQ = SQLQ & ", DateSiteAdded = '" & Date.Today & "'"
 End If
 SQLQ = SQLQ & " WHERE CategoryID = " & ParentID
 Dim secondConnection As New SqlConnection(GetDBConnectionString)
 Dim secondCommand As New SqlCommand(SQLQ, secondConnection)
 secondCommand.CommandType = CommandType.Text
 secondConnection.Open()
 secondCommand.ExecuteNonQuery()
 secondConnection.Close()
 secondConnection = Nothing
 'Look up this category's parent and recurse until we reach the root.
 Dim SQLQ2 As String = "SELECT ParentID FROM Categories WHERE CategoryID = " & ParentID
 Dim myConnection As New SqlConnection(GetDBConnectionString)
 Dim myCommand As New SqlCommand(SQLQ2, myConnection)
 myConnection.Open()
 Dim result As SqlDataReader = myCommand.ExecuteReader(CommandBehavior.CloseConnection)
 If result.Read() Then
  UpdateParentCount(CInt(result("ParentID")), AddSubtract)
 End If
 result.Close() 'CommandBehavior.CloseConnection closes myConnection as well.
 myConnection = Nothing
End Sub

This wasn't too bad of a solution, since it eliminated the extra database hits for computing the count. The problem is that the count can become incorrect if any changes are made outside of the application.
The solution for me had to meet the following requirements: reduce traffic between the Web server and the database server, and ensure an accurate count is always available in the database regardless of what modifies the records it contains. To accomplish this, the count method was moved out of the ASP.NET code and into the database. SQL Server 2000 supports user-defined functions that can be called from a stored procedure; in addition, a function can recurse up to 32 levels. Since I can't see a need for more than 32 levels of categories, I opted to use a function to produce the count. Here is an example of calling a function from within a stored procedure:

ALTER PROCEDURE GetCategories
 @ParentID int
AS
BEGIN
 SELECT
  CategoryID, CategoryName, Path, SiteCount, DateSiteAdded,
  ParentID, SortColumn, dbo.CountChildren(CategoryID, 0) AS CulCount
 FROM
  Categories
 WHERE
  ParentID = @ParentID
 ORDER BY
  SortColumn, CategoryName
END

You'll notice that the stored procedure's SELECT statement calls a function named CountChildren. We pass it the category ID of the current category and the current cumulative count of the sites within that category.

ALTER FUNCTION dbo.CountChildren
 (@id int, @cChildren int)
RETURNS bigint
AS
BEGIN
 -- Does this category hold any sites itself, or through child categories?
 IF EXISTS (SELECT Sites.SiteCatID
   FROM dbo.Categories
   INNER JOIN dbo.Sites
    ON dbo.Categories.CategoryID = dbo.Sites.SiteCatID
   WHERE dbo.Categories.ParentID = @id OR dbo.Sites.SiteCatID = @id)
 BEGIN
  -- Add the active sites filed directly under this category.
  SET @cChildren = @cChildren +
   (SELECT COUNT(SiteCatID)
    FROM Sites
    WHERE SiteCatID = @id AND SiteActive = 1)
  -- Recurse into each child category, accumulating the running total.
  SELECT @cChildren = dbo.CountChildren(CategoryID, @cChildren)
  FROM Categories
  WHERE ParentID = @id
 END
 RETURN @cChildren
END

As you can see, the function calls itself just as a recursive function in VB.NET would, each time incrementing the cumulative count of all descendants of a particular category. In the end, all the information is generated on the SQL Server machine, which returns what we need without calling a recursive method in the ASP.NET page and generating all the additional database calls over the network.
By: Patrick Santry, Microsoft MVP (ASP/ASP.NET), developer of this site, author of books on Web technologies, and member of the DotNetNuke core development team. If you're interested in the services provided by Patrick, visit his company Website at Santry.com.

Source: http://www.wwwcoder.com/

Recursive Sub-Procedures in ASP

This tutorial explains how to do recursive subroutines in ASP (VBScript). Use this algorithm to create threaded discussions, directories, or whatever use you have for it.

One of the best algorithms to know as a Web developer is how to code recursive sub procedures. What's recursion, and how does it help you as an ASP developer? Recursion allows you to code parent-child relationships. Parent-child relationships are the essence of directory tree-like structures. Trees allow you to develop threaded discussions, directory search engines (like santry.com), and perform lookups through your directories.

This is done by having a subroutine call itself. The subroutine is called once initially, and then it calls itself until an end condition is reached. Say, for instance, you're listing the contents of a directory: each time the procedure encounters a subdirectory, it calls itself to list the contents of that subdirectory, then calls itself again for the directories inside that subdirectory, and so on.


You can represent this parent-child relationship in a database table. For example, consider the following table structure:

 

RecordID (autonumber)   ParentID   DisplayName
1                       0          Topic 1
2                       0          Topic 2
3                       1          RE: Topic 1
4                       1          RE: Topic 1
5                       3          RE: RE: Topic 1
6                       3          RE: RE: Topic 1



You can see from the above table that we have several records. Some records have 0 as their ParentID, meaning they are top-level parent records; others have a nonzero ParentID, meaning they are children of the record with the matching RecordID. This structure is very versatile: you can have unlimited child records, allowing you to create as many nests or branches as you wish. You should now see why this algorithm is so useful on the Web. This structure drives threaded forums, directories, and all the cool apps you want to be able to create.


The next piece of code shows how to make the SQL call and then call the subroutine in order to display this structure to the user.
Set DBConn = Server.CreateObject("ADODB.Connection")
DBConn.Open "DSN=MyDSN"
'Here we initially call the subroutine. We pass 0 as the parent ID;
'this pulls all top-level parents (records with no ancestor).
'We also pass 0 for the level, which is used for spacing, or
'making the results appear threaded.
DoTree 0, 0
'----------------------------------------------------------
Sub DoTree(ParentID, intLevel)
 'DBConn is the page-level connection opened above; it is deliberately
 'not re-declared here, so every recursive call shares it.
 Dim SQLQ, rs
 SQLQ = "SELECT RecordID, DisplayName FROM RECORDS " & _
        "WHERE ParentID = " & ParentID
 Set rs = DBConn.Execute(SQLQ)
 Do Until rs.EOF
  'Indent 15 pixels per nesting level.
  Response.Write "<img src=""Spacer.gif"" width=""" & _
   (15 * intLevel) & """>"
  Response.Write rs("DisplayName") & "<br>"
  'Now call the subroutine we're in to render this record's
  'children one level deeper, and so on...
  DoTree rs("RecordID"), intLevel + 1
  rs.MoveNext
 Loop
 rs.Close
 Set rs = Nothing
End Sub
'------------------------------------------------------------
DBConn.Close
Set DBConn = Nothing
'Once this routine executes you should see results similar to this:
Topic 1
    RE: Topic 1
        RE: RE: Topic 1
        RE: RE: Topic 1
    RE: Topic 1
Topic 2

So you can see from this example how you can use this technique to create a threaded-type forum. You'll need to play around with the preceding code a bit to find out how you can put it to use in your application.

Source: http://www.wwwcoder.com/

Recursive Functions in ASP

A function that calls itself repeatedly until some condition is satisfied is called a recursive function. Using recursion, we split a complex problem down into its single simplest case; the recursive function only knows how to solve that simplest case. You'll see the difference between solving a problem iteratively and recursively later.

Warning!

Beware! Use the recursion technique carefully; if you fail to handle it appropriately, you can end up with a never-ending ASP application.

Same Old Factorial problem

In almost every language, beginners learning the concept of recursion are presented with the same old factorial problem. I'll follow the traditional path, because I feel it's the easiest way to understand the basic concept of recursion.

Understanding the problem

In mathematics, the factorial of a positive integer n, written as n!, is given by the following formula:

n! = n · (n-1) · (n-2) · (n-3) · ... · 1

In plain English, the factorial of a number is the product of that number and every positive integer smaller than it, multiplying downward until the factor reaches 1.

Iterative Solution

In the following function the value of 5! is calculated iteratively:

<%
Response.Write iterativeFactorial(5)

Function iterativeFactorial(intNumber)
 Dim intCount, intFactorial
 intFactorial = 1
 'Multiply intNumber * (intNumber - 1) * ... * 1
 For intCount = intNumber To 1 Step -1
  intFactorial = intFactorial * intCount
 Next
 iterativeFactorial = intFactorial
End Function
%>

Recursive Solution

The following function represents the recursive solution for the factorial problem.

<%
Response.Write recursiveFactorial(5)

Function recursiveFactorial(intNumber)
 If intNumber <= 1 Then
  'Base case: 1! = 0! = 1
  recursiveFactorial = 1
 Else
  'Recursive case: n! = n * (n-1)!
  recursiveFactorial = intNumber * recursiveFactorial(intNumber - 1)
 End If
End Function
%>

The recursiveFactorial function works simply: if the integer argument intNumber is less than or equal to one, the function returns 1; otherwise, the return value is the product of intNumber and the value returned by the same function for the argument intNumber - 1.

The answer to the above problem:

5! = 5 · (5-1) · (5-2) · (5-3) · (5-4)

Simplifying:

5! = 5 · 4 · 3 · 2 · 1 = 120

Example: Folder Tree

The following example will display the tree structure for a specified folder.

<%
' Displays the tree structure, replacing each vbTab character with three
' non-breaking HTML spaces so the indentation survives in the browser.
Response.Write Replace(DisplayFolderTree("c:\windows", 0), vbTab, "&nbsp;&nbsp;&nbsp;")

Function DisplayFolderTree(strFolder, intTreeLevel)

Dim objFileSystem, objFolder, objSubFolders, objTempFolder, strArgument
Set objFileSystem = CreateObject("Scripting.FileSystemObject")

' Get the current folder
Set objFolder = objFileSystem.GetFolder(strFolder)

' Get the list of subfolders
Set objSubFolders = objFolder.SubFolders

' Add the name of the current folder with the proper indentation;
' intTreeLevel is used only for indenting the folder name according to its position
DisplayFolderTree = DisplayFolderTree & String(intTreeLevel, vbTab) & "•"
DisplayFolderTree = DisplayFolderTree & vbTab & objFolder.Name & "<br>"

' Recursion takes place here:
' if any subfolders exist, the function is repeated for each of them too!
If objSubFolders.Count > 0 Then
For Each objTempFolder In objSubFolders
strArgument = strFolder & "\" & objTempFolder.Name
DisplayFolderTree = DisplayFolderTree & DisplayFolderTree(strArgument, intTreeLevel + 1)
Next
End If

' When control reaches here, the function has ended and no more
' recursive calls will take place for this branch.

End Function
%>


Conclusion

Recursive functions solve a problem in a way much like their real-life solutions. In the example above, we simply display the name of the current folder, then obtain a list of subfolders and repeat the function for each of them.

With iterative functions, we know the number of iterations, i.e. how many times a loop will be executed, in advance.

In the factorial problem, recursion is not necessary because we know that a loop will be executed until the value of the number becomes 1. But in the folder tree example, we never know how many subfolders a folder may contain or how many iterations will be required. For this purpose, the most suitable solution is to call the function recursively to display information about a folder and its subfolders.

Source: http://www.15seconds.com/

Wednesday, July 9, 2008

Best Practices for Speeding Up Your Web Site

The Exceptional Performance team has identified a number of best practices for making web pages fast.

Minimize HTTP Requests

80% of the end-user response time is spent on the front-end. Most of this time is tied up in downloading all the components in the page: images, stylesheets, scripts, Flash, etc. Reducing the number of components in turn reduces the number of HTTP requests required to render the page. This is the key to faster pages.

One way to reduce the number of components in the page is to simplify the page's design. But is there a way to build pages with richer content while also achieving fast response times? Here are some techniques for reducing the number of HTTP requests, while still supporting rich page designs.

Combined files are a way to reduce the number of HTTP requests by combining all scripts into a single script, and similarly combining all CSS into a single stylesheet. Combining files is more challenging when the scripts and stylesheets vary from page to page, but making this part of your release process improves response times.

CSS Sprites are the preferred method for reducing the number of image requests. Combine your background images into a single image and use the CSS background-image and background-position properties to display the desired image segment.
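
For example, a minimal sprite rule might look like the following sketch (the file name icons.png and the 16x16 icon offsets are hypothetical):

      .icon-home   { background: url(icons.png) no-repeat 0 0; width: 16px; height: 16px; }
      .icon-search { background: url(icons.png) no-repeat -16px 0; width: 16px; height: 16px; }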

Image maps combine multiple images into a single image. The overall size is about the same, but reducing the number of HTTP requests speeds up the page. Image maps only work if the images are contiguous in the page, such as a navigation bar. Defining the coordinates of image maps can be tedious and error prone, and using image maps for navigation is also not accessible, so it's not recommended.

Inline images use the data: URL scheme to embed the image data in the actual page. This can increase the size of your HTML document. Combining inline images into your (cached) stylesheets is a way to reduce HTTP requests and avoid increasing the size of your pages. Inline images are not yet supported across all major browsers.

Reducing the number of HTTP requests in your page is the place to start. This is the most important guideline for improving performance for first time visitors. As described in Tenni Theurer's blog post Browser Cache Usage - Exposed!, 40-60% of daily visitors to your site come in with an empty cache. Making your page fast for these first time visitors is key to a better user experience.

Use a Content Delivery Network

The user's proximity to your web server has an impact on response times. Deploying your content across multiple, geographically dispersed servers will make your pages load faster from the user's perspective. But where should you start?

As a first step to implementing geographically dispersed content, don't attempt to redesign your web application to work in a distributed architecture. Depending on the application, changing the architecture could include daunting tasks such as synchronizing session state and replicating database transactions across server locations. Attempts to reduce the distance between users and your content could be delayed by, or never pass, this application architecture step.

Remember that 80-90% of the end-user response time is spent downloading all the components in the page: images, stylesheets, scripts, Flash, etc. This is the Performance Golden Rule. Rather than starting with the difficult task of redesigning your application architecture, it's better to first disperse your static content. This not only achieves a bigger reduction in response times, but it's easier thanks to content delivery networks.

A content delivery network (CDN) is a collection of web servers distributed across multiple locations to deliver content more efficiently to users. The server selected for delivering content to a specific user is typically based on a measure of network proximity. For example, the server with the fewest network hops or the server with the quickest response time is chosen.

Some large Internet companies own their own CDN, but it's cost-effective to use a CDN service provider, such as Akamai Technologies, Mirror Image Internet, or Limelight Networks. For start-up companies and private web sites, the cost of a CDN service can be prohibitive, but as your target audience grows larger and becomes more global, a CDN is necessary to achieve fast response times. At Yahoo!, properties that moved static content off their application web servers to a CDN improved end-user response times by 20% or more. Switching to a CDN is a relatively easy code change that will dramatically improve the speed of your web site.

Add an Expires or a Cache-Control Header

There are two things in this rule:

  • For static components: implement "Never expire" policy by setting far future Expires header
  • For dynamic components: use an appropriate Cache-Control header to help the browser with conditional requests

Web page designs are getting richer and richer, which means more scripts, stylesheets, images, and Flash in the page. A first-time visitor to your page may have to make several HTTP requests, but by using the Expires header you make those components cacheable. This avoids unnecessary HTTP requests on subsequent page views. Expires headers are most often used with images, but they should be used on all components including scripts, stylesheets, and Flash components.

Browsers (and proxies) use a cache to reduce the number and size of HTTP requests, making web pages load faster. A web server uses the Expires header in the HTTP response to tell the client how long a component can be cached. This is a far future Expires header, telling the browser that this response won't be stale until April 15, 2010.

      Expires: Thu, 15 Apr 2010 20:00:00 GMT

If your server is Apache, use the ExpiresDefault directive to set an expiration date relative to the current date. This example of the ExpiresDefault directive sets the Expires date 10 years out from the time of the request.

      ExpiresDefault "access plus 10 years"

Keep in mind, if you use a far future Expires header you have to change the component's filename whenever the component changes. At Yahoo! we often make this step part of the build process: a version number is embedded in the component's filename, for example, yahoo_2.0.6.js.

Using a far future Expires header affects page views only after a user has already visited your site. It has no effect on the number of HTTP requests when a user visits your site for the first time and the browser's cache is empty. Therefore the impact of this performance improvement depends on how often users hit your pages with a primed cache. (A "primed cache" already contains all of the components in the page.) We measured this at Yahoo! and found the number of page views with a primed cache is 75-85%. By using a far future Expires header, you increase the number of components that are cached by the browser and re-used on subsequent page views without sending a single byte over the user's Internet connection.

Gzip Components

The time it takes to transfer an HTTP request and response across the network can be significantly reduced by decisions made by front-end engineers. It's true that the end-user's bandwidth speed, Internet service provider, proximity to peering exchange points, etc. are beyond the control of the development team. But there are other variables that affect response times. Compression reduces response times by reducing the size of the HTTP response.

Starting with HTTP/1.1, web clients indicate support for compression with the Accept-Encoding header in the HTTP request.

      Accept-Encoding: gzip, deflate

If the web server sees this header in the request, it may compress the response using one of the methods listed by the client. The web server notifies the web client of this via the Content-Encoding header in the response.

      Content-Encoding: gzip

Gzip is the most popular and effective compression method at this time. It was developed by the GNU project and standardized by RFC 1952. The only other compression format you're likely to see is deflate, but it's less effective and less popular.

Gzipping generally reduces the response size by about 70%. Approximately 90% of today's Internet traffic travels through browsers that claim to support gzip. If you use Apache, the module configuring gzip depends on your version: Apache 1.3 uses mod_gzip while Apache 2.x uses mod_deflate.
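
As a sketch, enabling compression for the common text types in Apache 2.x looks like this (the exact MIME types to list depend on what your server emits):

      AddOutputFilterByType DEFLATE text/html text/css application/x-javascript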

There are known issues with browsers and proxies that may cause a mismatch in what the browser expects and what it receives with regard to compressed content. Fortunately, these edge cases are dwindling as the use of older browsers drops off. The Apache modules help out by adding appropriate Vary response headers automatically.

Servers choose what to gzip based on file type, but are typically too limited in what they decide to compress. Most web sites gzip their HTML documents. It's also worthwhile to gzip your scripts and stylesheets, but many web sites miss this opportunity. In fact, it's worthwhile to compress any text response including XML and JSON. Image and PDF files should not be gzipped because they are already compressed. Trying to gzip them not only wastes CPU but can potentially increase file sizes.

Gzipping as many file types as possible is an easy way to reduce page weight and accelerate the user experience.

Put Stylesheets at the Top

While researching performance at Yahoo!, we discovered that moving stylesheets to the document HEAD makes pages appear to load faster. This is because putting stylesheets in the HEAD allows the page to render progressively.

Front-end engineers that care about performance want a page to load progressively; that is, we want the browser to display whatever content it has as soon as possible. This is especially important for pages with a lot of content and for users on slower Internet connections. The importance of giving users visual feedback, such as progress indicators, has been well researched and documented. In our case the HTML page is the progress indicator! When the browser loads the page progressively the header, the navigation bar, the logo at the top, etc. all serve as visual feedback for the user who is waiting for the page. This improves the overall user experience.

The problem with putting stylesheets near the bottom of the document is that it prohibits progressive rendering in many browsers, including Internet Explorer. These browsers block rendering to avoid having to redraw elements of the page if their styles change. The user is stuck viewing a blank white page.

The HTML specification clearly states that stylesheets are to be included in the HEAD of the page: "Unlike A, [LINK] may only appear in the HEAD section of a document, although it may appear any number of times." Neither of the alternatives, the blank white screen or flash of unstyled content, are worth the risk. The optimal solution is to follow the HTML specification and load your stylesheets in the document HEAD.

Put Scripts at the Bottom

The problem caused by scripts is that they block parallel downloads. The HTTP/1.1 specification suggests that browsers download no more than two components in parallel per hostname. If you serve your images from multiple hostnames, you can get more than two downloads to occur in parallel. While a script is downloading, however, the browser won't start any other downloads, even on different hostnames.

In some situations it's not easy to move scripts to the bottom. If, for example, the script uses document.write to insert part of the page's content, it can't be moved lower in the page. There might also be scoping issues. In many cases, there are ways to work around these situations.

An alternative suggestion that often comes up is to use deferred scripts. The DEFER attribute indicates that the script does not contain document.write, and is a clue to browsers that they can continue rendering. Unfortunately, Firefox doesn't support the DEFER attribute. In Internet Explorer, the script may be deferred, but not as much as desired. If a script can be deferred, it can also be moved to the bottom of the page. That will make your web pages load faster.
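
For reference, deferring an external script is a single attribute on the SCRIPT tag (the filename here just reuses the example that appears later in this article):

      <script defer type="text/javascript" src="menu_1.0.17.js"></script>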

Avoid CSS Expressions

CSS expressions are a powerful (and dangerous) way to set CSS properties dynamically. They're supported in Internet Explorer, starting with version 5. As an example, the background color could be set to alternate every hour using CSS expressions.

      background-color: expression( (new Date()).getHours()%2 ? "#B8D4FF" : "#F08A00" );

As shown here, the expression method accepts a JavaScript expression. The CSS property is set to the result of evaluating the JavaScript expression. The expression method is ignored by other browsers, so it is useful for setting properties in Internet Explorer needed to create a consistent experience across browsers.

The problem with expressions is that they are evaluated more frequently than most people expect. Not only are they evaluated when the page is rendered and resized, but also when the page is scrolled and even when the user moves the mouse over the page. Adding a counter to the CSS expression allows us to keep track of when and how often a CSS expression is evaluated. Moving the mouse around the page can easily generate more than 10,000 evaluations.

One way to reduce the number of times your CSS expression is evaluated is to use one-time expressions, where the first time the expression is evaluated it sets the style property to an explicit value, which replaces the CSS expression. If the style property must be set dynamically throughout the life of the page, using event handlers instead of CSS expressions is an alternative approach. If you must use CSS expressions, remember that they may be evaluated thousands of times and could affect the performance of your page.

Make JavaScript and CSS External

Many of these performance rules deal with how external components are managed. However, before these considerations arise you should ask a more basic question: Should JavaScript and CSS be contained in external files, or inlined in the page itself?

Using external files in the real world generally produces faster pages because the JavaScript and CSS files are cached by the browser. JavaScript and CSS that are inlined in HTML documents get downloaded every time the HTML document is requested. This reduces the number of HTTP requests that are needed, but increases the size of the HTML document. On the other hand, if the JavaScript and CSS are in external files cached by the browser, the size of the HTML document is reduced without increasing the number of HTTP requests.

The key factor, then, is the frequency with which external JavaScript and CSS components are cached relative to the number of HTML documents requested. This factor, although difficult to quantify, can be gauged using various metrics. If users on your site have multiple page views per session and many of your pages re-use the same scripts and stylesheets, there is a greater potential benefit from cached external files.

Many web sites fall in the middle of these metrics. For these sites, the best solution generally is to deploy the JavaScript and CSS as external files. The only exception where inlining is preferable is with home pages, such as Yahoo!'s front page and My Yahoo!. Home pages that have few (perhaps only one) page view per session may find that inlining JavaScript and CSS results in faster end-user response times.

For front pages that are typically the first of many page views, there are techniques that leverage the reduction of HTTP requests that inlining provides, as well as the caching benefits achieved through using external files. One such technique is to inline JavaScript and CSS in the front page, but dynamically download the external files after the page has finished loading. Subsequent pages would reference the external files that should already be in the browser's cache.

Reduce DNS Lookups

The Domain Name System (DNS) maps hostnames to IP addresses, just as phonebooks map people's names to their phone numbers. When you type www.yahoo.com into your browser, a DNS resolver contacted by the browser returns that server's IP address. DNS has a cost. It typically takes 20-120 milliseconds for DNS to lookup the IP address for a given hostname. The browser can't download anything from this hostname until the DNS lookup is completed.

DNS lookups are cached for better performance. This caching can occur on a special caching server, maintained by the user's ISP or local area network, but there is also caching that occurs on the individual user's computer. The DNS information remains in the operating system's DNS cache (the "DNS Client service" on Microsoft Windows). Most browsers have their own caches, separate from the operating system's cache. As long as the browser keeps a DNS record in its own cache, it doesn't bother the operating system with a request for the record.

Internet Explorer caches DNS lookups for 30 minutes by default, as specified by the DnsCacheTimeout registry setting. Firefox caches DNS lookups for 1 minute, controlled by the network.dnsCacheExpiration configuration setting. (Fasterfox changes this to 1 hour.)

When the client's DNS cache is empty (for both the browser and the operating system), the number of DNS lookups is equal to the number of unique hostnames in the web page. This includes the hostnames used in the page's URL, images, script files, stylesheets, Flash objects, etc. Reducing the number of unique hostnames reduces the number of DNS lookups.

Reducing the number of unique hostnames has the potential to reduce the amount of parallel downloading that takes place in the page. Avoiding DNS lookups cuts response times, but reducing parallel downloads may increase response times. My guideline is to split these components across at least two but no more than four hostnames. This results in a good compromise between reducing DNS lookups and allowing a high degree of parallel downloads.

Minify JavaScript and CSS

Minification is the practice of removing unnecessary characters from code to reduce its size, thereby improving load times. When code is minified, all comments are removed, as well as unneeded white space characters (space, newline, and tab). In the case of JavaScript, this improves response time performance because the size of the downloaded file is reduced. Two popular tools for minifying JavaScript code are JSMin and YUI Compressor. The YUI Compressor can also minify CSS.
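
As a purely illustrative before-and-after (actual JSMin or YUI Compressor output will vary):

      /* before */
      a:link, a:visited { color: #ff0000; text-decoration: none; }
      /* after */
      a:link,a:visited{color:#f00;text-decoration:none}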

Obfuscation is an alternative optimization that can be applied to source code. It's more complex than minification and thus more likely to generate bugs as a result of the obfuscation step itself. In a survey of ten top U.S. web sites, minification achieved a 21% size reduction versus 25% for obfuscation. Although obfuscation has a higher size reduction, minifying JavaScript is less risky.

In addition to minifying external scripts and styles, inlined <script> and <style> blocks can and should also be minified. Even if you gzip your scripts and styles, minifying them will still reduce the size by 5% or more. As the use and size of JavaScript and CSS increases, so will the savings gained by minifying your code.

Avoid Redirects

Redirects are accomplished using the 301 and 302 status codes. Here's an example of the HTTP headers in a 301 response:

      HTTP/1.1 301 Moved Permanently
      Location: http://example.com/newuri
      Content-Type: text/html

The browser automatically takes the user to the URL specified in the Location field. All the information necessary for a redirect is in the headers. The body of the response is typically empty. Despite their names, neither a 301 nor a 302 response is cached in practice unless additional headers, such as Expires or Cache-Control, indicate it should be. The meta refresh tag and JavaScript are other ways to direct users to a different URL, but if you must do a redirect, the preferred technique is to use the standard 3xx HTTP status codes, primarily to ensure the back button works correctly.

The main thing to remember is that redirects slow down the user experience. Inserting a redirect between the user and the HTML document delays everything in the page since nothing in the page can be rendered and no components can start being downloaded until the HTML document has arrived.

One of the most wasteful redirects happens frequently and web developers are generally not aware of it. It occurs when a trailing slash (/) is missing from a URL that should otherwise have one. For example, going to http://astrology.yahoo.com/astrology results in a 301 response containing a redirect to http://astrology.yahoo.com/astrology/ (notice the added trailing slash). This is fixed in Apache by using Alias or mod_rewrite, or the DirectorySlash directive if you're using Apache handlers.
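
One possible sketch, assuming mod_rewrite is enabled, rewrites the URL internally so no 301 is sent back to the browser (verify the rule against your own Apache configuration):

      RewriteEngine On
      # No [R] flag: the rewrite is internal, so the browser never sees a redirect.
      RewriteRule ^/astrology$ /astrology/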

Connecting an old web site to a new one is another common use for redirects. Others include connecting different parts of a website and directing the user based on certain conditions (type of browser, type of user account, etc.). Using a redirect to connect two web sites is simple and requires little additional coding. Although using redirects in these situations reduces the complexity for developers, it degrades the user experience. Alternatives for this use of redirects include using Alias and mod_rewrite if the two code paths are hosted on the same server. If a domain name change is the cause of using redirects, an alternative is to create a CNAME (a DNS record that creates an alias pointing from one domain name to another) in combination with Alias or mod_rewrite.

Remove Duplicate Scripts

It hurts performance to include the same JavaScript file twice in one page. This isn't as unusual as you might think. A review of the ten top U.S. web sites shows that two of them contain a duplicated script. Two main factors increase the odds of a script being duplicated in a single web page: team size and number of scripts. When it does happen, duplicate scripts hurt performance by creating unnecessary HTTP requests and wasted JavaScript execution.

Unnecessary HTTP requests happen in Internet Explorer, but not in Firefox. In Internet Explorer, if an external script is included twice and is not cacheable, it generates two HTTP requests during page loading. Even if the script is cacheable, extra HTTP requests occur when the user reloads the page.

In addition to generating wasteful HTTP requests, time is wasted evaluating the script multiple times. This redundant JavaScript execution happens in both Firefox and Internet Explorer, regardless of whether the script is cacheable.

One way to avoid accidentally including the same script twice is to implement a script management module in your templating system. The typical way to include a script is to use the SCRIPT tag in your HTML page.

      <script type="text/javascript" src="menu_1.0.17.js"></script>

An alternative in PHP would be to create a function called insertScript.

      <?php insertScript("menu.js") ?>

In addition to preventing the same script from being inserted multiple times, this function could handle other issues with scripts, such as dependency checking and adding version numbers to script filenames to support far future Expires headers.
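
A minimal sketch of such a function follows; the $insertedScripts registry and the version map are assumptions for illustration, not part of any existing library:

      <?php
      $insertedScripts = array();
      function insertScript($file) {
          global $insertedScripts;
          // Skip scripts already written to this page.
          if (isset($insertedScripts[$file])) {
              return;
          }
          $insertedScripts[$file] = true;
          // Hypothetical map from logical name to versioned filename,
          // supporting far future Expires headers.
          $versions = array("menu.js" => "menu_1.0.17.js");
          $src = isset($versions[$file]) ? $versions[$file] : $file;
          echo '<script type="text/javascript" src="' . $src . '"></script>';
      }
      ?>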

Configure ETags

Entity tags (ETags) are a mechanism that web servers and browsers use to determine whether the component in the browser's cache matches the one on the origin server. (An "entity" is another word for "component": images, scripts, stylesheets, etc.) ETags were added to provide a mechanism for validating entities that is more flexible than the last-modified date. An ETag is a string that uniquely identifies a specific version of a component. The only format constraint is that the string be quoted. The origin server specifies the component's ETag using the ETag response header.

      HTTP/1.1 200 OK
      Last-Modified: Tue, 12 Dec 2006 03:03:59 GMT
      ETag: "10c24bc-4ab-457e1c1f"
      Content-Length: 12195

Later, if the browser has to validate a component, it uses the If-None-Match header to pass the ETag back to the origin server. If the ETags match, a 304 status code is returned, reducing the response by 12195 bytes in this example.

      GET /i/yahoo.gif HTTP/1.1
      Host: us.yimg.com
      If-Modified-Since: Tue, 12 Dec 2006 03:03:59 GMT
      If-None-Match: "10c24bc-4ab-457e1c1f"

      HTTP/1.1 304 Not Modified

The problem with ETags is that they typically are constructed using attributes that make them unique to a specific server hosting a site. ETags won't match when a browser gets the original component from one server and later tries to validate that component on a different server, a situation that is all too common on Web sites that use a cluster of servers to handle requests. By default, both Apache and IIS embed data in the ETag that dramatically reduces the odds of the validity test succeeding on web sites with multiple servers.

The ETag format for Apache 1.3 and 2.x is inode-size-timestamp. Although a given file may reside in the same directory across multiple servers, and have the same file size, permissions, timestamp, etc., its inode is different from one server to the next.

IIS 5.0 and 6.0 have a similar issue with ETags. The format for ETags on IIS is Filetimestamp:ChangeNumber. A ChangeNumber is a counter used to track configuration changes to IIS. It's unlikely that the ChangeNumber is the same across all IIS servers behind a web site.

The end result is ETags generated by Apache and IIS for the exact same component won't match from one server to another. If the ETags don't match, the user doesn't receive the small, fast 304 response that ETags were designed for; instead, they'll get a normal 200 response along with all the data for the component. If you host your web site on just one server, this isn't a problem. But if you have multiple servers hosting your web site, and you're using Apache or IIS with the default ETag configuration, your users are getting slower pages, your servers have a higher load, you're consuming greater bandwidth, and proxies aren't caching your content efficiently. Even if your components have a far future Expires header, a conditional GET request is still made whenever the user hits Reload or Refresh.

If you're not taking advantage of the flexible validation model that ETags provide, it's better to just remove the ETag altogether. The Last-Modified header validates based on the component's timestamp. And removing the ETag reduces the size of the HTTP headers in both the response and subsequent requests. This Microsoft Support article describes how to remove ETags. In Apache, this is done by simply adding the following line to your Apache configuration file:

      FileETag none

Make Ajax Cacheable

One of the cited benefits of Ajax is that it provides instantaneous feedback to the user because it requests information asynchronously from the backend web server. However, using Ajax is no guarantee that the user won't be twiddling his thumbs waiting for those asynchronous JavaScript and XML responses to return. In many applications, whether or not the user is kept waiting depends on how Ajax is used. For example, in a web-based email client the user will be kept waiting for the results of an Ajax request to find all the email messages that match their search criteria. It's important to remember that "asynchronous" does not imply "instantaneous".

To improve performance, it's important to optimize these Ajax responses. The most important way to improve the performance of Ajax is to make the responses cacheable, as discussed in Add an Expires or a Cache-Control Header. Some of the other rules also apply to Ajax:

  • Gzip Components
  • Reduce DNS Lookups
  • Minify JavaScript and CSS
  • Avoid Redirects
  • Configure ETags

Let's look at an example. A Web 2.0 email client might use Ajax to download the user's address book for autocompletion. If the user hasn't modified her address book since the last time she used the email web app, the previous address book response could be read from cache if that Ajax response was made cacheable with a future Expires or Cache-Control header. The browser must be informed when to use a previously cached address book response versus requesting a new one. This could be done by adding a timestamp to the address book Ajax URL indicating the last time the user modified her address book, for example, &t=1190241612. If the address book hasn't been modified since the last download, the timestamp will be the same and the address book will be read from the browser's cache eliminating an extra HTTP roundtrip. If the user has modified her address book, the timestamp ensures the new URL doesn't match the cached response, and the browser will request the updated address book entries.
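
A server-side sketch of building such a URL (getAddressBookMtime is a hypothetical helper returning the Unix timestamp of the user's last address book change):

      <?php
      $t = getAddressBookMtime($userId); // hypothetical helper
      // An unchanged address book yields the same URL (served from cache);
      // a modified one yields a new URL that bypasses the stale cached copy.
      $url = "/ajax/addressbook?user=" . urlencode($userId) . "&t=" . $t;
      ?>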

Even though your Ajax responses are created dynamically, and might only be applicable to a single user, they can still be cached. Doing so will make your Web 2.0 apps faster.

Flush the Buffer Early

When users request a page, it can take anywhere from 200 to 500ms for the backend server to stitch together the HTML page. During this time, the browser is idle as it waits for the data to arrive. In PHP you have the function flush(). It allows you to send your partially ready HTML response to the browser so that the browser can start fetching components while your backend is busy with the rest of the HTML page. The benefit is mainly seen on busy backends or light frontends.

A good place to consider flushing is right after the HEAD because the HTML for the head is usually easier to produce and it allows you to include any CSS and JavaScript files for the browser to start fetching in parallel while the backend is still processing.

Example:

      ... <!-- css, js -->
      </head>
      <?php flush(); ?>
      <body>
        ... <!-- content -->

Yahoo! search pioneered research and real user testing to prove the benefits of using this technique.

Use GET for AJAX Requests

The Yahoo! Mail team found that when using XMLHttpRequest, POST is implemented in the browsers as a two-step process: sending the headers first, then sending data. So it's best to use GET, which only takes one TCP packet to send (unless you have a lot of cookies). The maximum URL length in IE is 2K, so if you send more than 2K data you might not be able to use GET.

An interesting side effect is that a POST without any data to send behaves like a GET. Based on the HTTP specs, GET is meant for retrieving information, so it makes sense (semantically) to use GET when you're only requesting data, as opposed to sending data to be stored server-side.

Post-load Components

You can take a closer look at your page and ask yourself: "What's absolutely required in order to render the page initially?". The rest of the content and components can wait.

JavaScript is an ideal candidate for splitting before and after the onload event. For example if you have JavaScript code and libraries that do drag and drop and animations, those can wait, because dragging elements on the page comes after the initial rendering. Other places to look for candidates for post-loading include hidden content (content that appears after a user action) and images below the fold.

Tools to help you out in your effort: YUI Image Loader allows you to delay images below the fold and the YUI Get utility is an easy way to include JS and CSS on the fly. For an example in the wild take a look at Yahoo! Home Page with Firebug's Net Panel turned on.

It's good when the performance goals are in line with other web development best practices. In this case, the idea of progressive enhancement tells us that JavaScript, when supported, can improve the user experience, but you have to make sure the page works even without JavaScript. So after you've made sure the page works fine, you can enhance it with post-loaded scripts that give you more bells and whistles such as drag and drop and animations.

Preload Components

Preload may look like the opposite of post-load, but it actually has a different goal. By preloading components you can take advantage of the time the browser is idle and request components (like images, styles and scripts) you'll need in the future. This way when the user visits the next page, you could have most of the components already in the cache and your page will load much faster for the user.

There are actually several types of preloading:

  • Unconditional preload - as soon as onload fires, you go ahead and fetch some extra components. Check google.com for an example of how a sprite image is requested onload. This sprite image is not needed on the google.com homepage, but it is needed on the consecutive search result page.
  • Conditional preload - based on a user action you make an educated guess where the user is headed next and preload accordingly. On search.yahoo.com you can see how some extra components are requested after you start typing in the input box.
  • Anticipated preload - preload in advance before launching a redesign. It often happens after a redesign that you hear: "The new site is cool, but it's slower than before". Part of the problem could be that the users were visiting your old site with a full cache, but the new one is always an empty cache experience. You can mitigate this side effect by preloading some components before you even launch the redesign. Your old site can use the time the browser is idle to request images and scripts that will be used by the new site.

Reduce the Number of DOM Elements

A complex page means more bytes to download and it also means slower DOM access in JavaScript. It makes a difference if you loop through 500 or 5000 DOM elements on the page when you want to add an event handler for example.

A high number of DOM elements can be a symptom that there's something that should be improved with the markup of the page without necessarily removing content. Are you using nested tables for layout purposes? Are you throwing in more <div>s only to fix layout issues? Maybe there's a better and more semantically correct way to do your markup.

The YUI CSS utilities are a great help with layouts: grids.css can help you with the overall layout, while fonts.css and reset.css can help you strip away the browser's default formatting. This is a chance to start fresh and think about your markup; for example, use <div>s only when it makes sense semantically, not just because they render a new line.

The number of DOM elements is easy to test; just type into Firebug's console:
document.getElementsByTagName('*').length

And how many DOM elements are too many? Check other similar pages that have good markup. For example the Yahoo! Home Page is a pretty busy page and still under 700 elements (HTML tags).

Split Components Across Domains

Splitting components allows you to maximize parallel downloads. Make sure you're using no more than two to four domains because of the DNS lookup penalty. For example, you can host your HTML and dynamic content on www.example.org and split static components between static1.example.org and static2.example.org.

For more information check "Maximizing Parallel Downloads in the Carpool Lane" by Tenni Theurer and Patty Chi.

Minimize the Number of iframes

Iframes allow an HTML document to be inserted in the parent document. It's important to understand how iframes work so they can be used effectively.

<iframe> pros:

  • Helps with slow third-party content like badges and ads
  • Security sandbox
  • Download scripts in parallel

<iframe> cons:

  • Costly even if blank
  • Blocks page onload
  • Non-semantic

No 404s

HTTP requests are expensive, so making an HTTP request and getting a useless response (i.e. 404 Not Found) is totally unnecessary and will slow down the user experience without any benefit.

Some sites have helpful 404s "Did you mean X?", which is great for the user experience but also wastes server resources (like database, etc). Particularly bad is when the link to an external JavaScript is wrong and the result is a 404. First, this download will block parallel downloads. Next the browser may try to parse the 404 response body as if it were JavaScript code, trying to find something usable in it.

Reduce Cookie Size

HTTP cookies are used for a variety of reasons such as authentication and personalization. Information about cookies is exchanged in the HTTP headers between web servers and browsers. It's important to keep the size of cookies as low as possible to minimize the impact on the user's response time.

For more information check "When the Cookie Crumbles" by Tenni Theurer and Patty Chi. The take-home of this research:

  • Eliminate unnecessary cookies
  • Keep cookie sizes as low as possible to minimize the impact on the user response time
  • Be mindful of setting cookies at the appropriate domain level so other sub-domains are not affected
  • Set an Expires date appropriately. An earlier Expires date or none removes the cookie sooner, improving the user response time

Use Cookie-Free Domains for Components

When the browser makes a request for a static image and sends cookies together with the request, the server doesn't have any use for those cookies. So they only create network traffic for no good reason. You should make sure static components are requested with cookie-free requests. Create a subdomain and host all your static components there.

If your domain is www.example.org, you can host your static components on static.example.org. However, if you've already set cookies on the top-level domain example.org as opposed to www.example.org, then all the requests to static.example.org will include those cookies. In this case, you can buy a whole new domain, host your static components there, and keep this domain cookie-free. Yahoo! uses yimg.com, YouTube uses ytimg.com, Amazon uses images-amazon.com and so on.

Another benefit of hosting static components on a cookie-free domain is that some proxies might refuse to cache the components that are requested with cookies. On a related note, if you wonder if you should use example.org or www.example.org for your home page, consider the cookie impact. Omitting www leaves you no choice but to write cookies to *.example.org, so for performance reasons it's best to use the www subdomain and write the cookies to that subdomain.

Minimize DOM Access

Accessing DOM elements with JavaScript is slow so in order to have a more responsive page, you should:

  • Cache references to accessed elements
  • Update nodes "offline" and then add them to the tree
  • Avoid fixing layout with JavaScript

For more information check the YUI theatre's "High Performance Ajax Applications" by Julien Lecomte.

Develop Smart Event Handlers

Sometimes pages feel less responsive because of too many event handlers attached to different elements of the DOM tree which are then executed too often. That's why using event delegation is a good approach. If you have 10 buttons inside a div, attach only one event handler to the div wrapper, instead of one handler for each button. Events bubble up so you'll be able to catch the event and figure out which button it originated from.

You also don't need to wait for the onload event in order to start doing something with the DOM tree. Often all you need is the element you want to access to be available in the tree. You don't have to wait for all images to be downloaded. DOMContentLoaded is the event you might consider using instead of onload, but until it's available in all browsers, you can use the YUI Event utility, which has an onAvailable method.

For more information check the YUI theatre's "High Performance Ajax Applications" by Julien Lecomte.

Choose <link> over @import

One of the previous best practices states that CSS should be at the top in order to allow for progressive rendering.

In IE, @import behaves the same as using <link> at the bottom of the page, so it's best not to use it.

Avoid Filters

The IE-proprietary AlphaImageLoader filter aims to fix a problem with semi-transparent true color PNGs in IE versions < 7. The problem with this filter is that it blocks rendering and freezes the browser while the image is being downloaded. It also increases memory consumption and is applied per element, not per image, so the problem is multiplied.

The best approach is to avoid AlphaImageLoader completely and use gracefully degrading PNG8 images instead, which work fine in IE. If you absolutely need AlphaImageLoader, use the underscore hack (_filter) so as not to penalize your IE7+ users.

Optimize Images

After a designer is done with creating the images for your web page, there are still some things you can try before you FTP those images to your web server.

  • You can check your GIFs to see whether the palette size matches the number of colors in the image. Using imagemagick it's easy to check:
    identify -verbose image.gif
    When you see an image using 4 colors and 256 color "slots" in the palette, there is room for improvement.
  • Try converting GIFs to PNGs and see if there is a saving. More often than not, there is. Developers often hesitate to use PNGs due to the limited support in browsers, but this is now a thing of the past. The only real problem is alpha-transparency in true color PNGs, but then again, GIFs are not true color and don't support variable transparency either. So anything a GIF can do, a palette PNG (PNG8) can do too (except for animations). This simple imagemagick command results in totally safe-to-use PNGs:
    convert image.gif image.png
    "All we are saying is: Give PiNG a Chance!"
  • Run pngcrush (or any other PNG optimizer tool) on all your PNGs. Example:
    pngcrush image.png -rem alla -reduce -brute result.png
  • Run jpegtran on all your JPEGs. This tool does lossless JPEG operations such as rotation and can also be used to optimize and remove comments and other useless information (such as EXIF information) from your images.
    jpegtran -copy none -optimize -perfect src.jpg dest.jpg

Optimize CSS Sprites

  • Arranging the images in the sprite horizontally as opposed to vertically usually results in a smaller file size.
  • Combining similar colors in a sprite helps you keep the color count low, ideally under 256 colors so to fit in a PNG8.
  • "Be mobile-friendly" and don't leave big gaps between the images in a sprite. This doesn't affect the file size as much but requires less memory for the user agent to decompress the image into a pixel map. 100x100 image is 10 thousand pixels, where 1000x1000 is 1 million pixels

Don't Scale Images in HTML

Don't use a bigger image than you need just because you can set the width and height in HTML. If you need
<img width="100" height="100" src="mycat.jpg" alt="My Cat" />
then your image (mycat.jpg) should be 100x100px rather than a scaled down 500x500px image.

Make favicon.ico Small and Cacheable

The favicon.ico is an image that stays in the root of your server. It's a necessary evil because even if you don't care about it the browser will still request it, so it's better not to respond with a 404 Not Found. Also since it's on the same server, cookies are sent every time it's requested. This image also interferes with the download sequence, for example in IE when you request extra components in the onload, the favicon will be downloaded before these extra components.

So to mitigate the drawbacks of having a favicon.ico make sure:

  • It's small, preferably under 1K.
  • Set the Expires header to a date you feel comfortable with (since you cannot rename the file if you decide to change it). You can probably safely set the Expires header a few months in the future. You can check the last modified date of your current favicon.ico to make an informed decision.

Imagemagick can help you create small favicons.
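As an illustration of the Expires bullet above, here is a minimal, hypothetical PHP sketch that serves the icon with a roughly three-month Expires header. In practice you would normally let the web server set the header on the static file, since routing favicon.ico through a script adds overhead; the sketch only shows the header format:

header('Content-Type: image/x-icon');
// Expire roughly three months from now (an assumption; pick a window you're comfortable with)
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 90 * 24 * 60 * 60) . ' GMT');
readfile('favicon.ico');  // the icon file, assumed to sit next to this script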

Keep Components under 25K

This restriction is related to the fact that iPhone won't cache components bigger than 25K. Note that this is the uncompressed size. This is where minification is important because gzip alone may not be sufficient.

For more information check "Performance Research, Part 5: iPhone Cacheability - Making it Stick" by Wayne Shea and Tenni Theurer.

Pack Components into a Multipart Document

Packing components into a multipart document is like an email with attachments, it helps you fetch several components with one HTTP request (remember: HTTP requests are expensive). When you use this technique, first check if the user agent supports it (iPhone does not).
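As a rough sketch of what such a response looks like, the PHP below emits two components in a single multipart/mixed body. The boundary string COMPONENT and the file names styles.css and app.js are placeholders, and as noted above you should verify that the requesting user agent actually supports multipart responses before sending one:

header('Content-Type: multipart/mixed; boundary="COMPONENT"');
echo "--COMPONENT\r\n";
echo "Content-Type: text/css\r\n\r\n";
echo file_get_contents('styles.css') . "\r\n";   // first component
echo "--COMPONENT\r\n";
echo "Content-Type: application/x-javascript\r\n\r\n";
echo file_get_contents('app.js') . "\r\n";       // second component
echo "--COMPONENT--\r\n";                        // closing boundary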

Source: Yahoo! Developer Network

Thursday, June 5, 2008

PHP Security / SQL Security - Part 1

Web Security: The Big Picture

Whether your site is the web presence for a large multinational, a gallery showing your product range and inviting potential customers to come into the shop, or a personal site exhibiting your holiday photos, web security matters. After the hard work put in to make your site look good and respond to your users, the last thing you want is for a malicious hacker to come along, perform a PHP hack and break it somehow.

There are a number of problems in web security, and unfortunately not all of them have definite solutions, but here we'll look at some of the problems that should be considered every time you set out to write a PHP script, so as to avoid a PHP hack attack. These are the problems which, with well-designed code, can be eliminated entirely. Before looking in detail at the solutions, though, let's take a moment to define the problems themselves.

SQL Injection

In this attack, a user is able to execute SQL queries in your website's database. This attack is usually performed by entering text into a form field which causes a subsequent SQL query, generated from the PHP form processing code, to execute part of the content of the form field as though it were SQL. The effects of this attack range from the harmless (simply using SELECT to pull another data set) to the devastating (DELETE, for instance). In more subtle attacks, data could be changed, or new data added.

Directory Traversal

This attack can occur anywhere user-supplied data (from a form field or uploaded filename, for example) is used in a filesystem operation. If a user specifies “../../../../../../etc/passwd” as form data, and your script appends that to a directory name to obtain user-specific files, this string could lead to the inclusion of the password file contents, instead of the intended file. More severe cases involve file operations such as moving and deleting, which allow an attacker to make arbitrary changes to your filesystem structure.
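One common defence, sketched below under the assumption that user files live in a single directory (the path /home/site/userfiles is hypothetical), is to discard any directory components from the user-supplied value before touching the filesystem:

$file = basename($_GET['file']);            // "../../etc/passwd" becomes just "passwd"
readfile('/home/site/userfiles/' . $file);  // can no longer escape the intended directory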

Authentication Issues

Authentication issues involve users gaining access to something they shouldn't, but to which other users should. An example would be a user who was able to steal (or construct) a cookie allowing them to login to your site under an Administrator session, and therefore be able to change anything they liked.

Remote Scripts (XSS)

XSS, or Cross-Site Scripting (also sometimes referred to as CSS, though this invites confusion with Cascading Style Sheets, something entirely different!) is the process of exploiting a security hole in a site to inject script, usually JavaScript, into its pages, so that the script runs in the browsers of other visitors. A closely related problem, remote script inclusion, occurs when code from a remote location is included into a running PHP script; this is a serious attack which could allow any code the attacker chooses to be run on the vulnerable server, with all of the permissions of the user hosting the script, including database and filesystem access.

Processing User Data – Form Input Verification & HTML Display

Validating Input And Stripping Tags

When a user enters information into a form which is to be later processed on your site, they have the power to enter anything they want. Code which processes form input should be carefully written to ensure that the input is as requested; password fields have the required level of complexity, e-mail fields have at least some characters, an @ sign, some more characters, a period, and two or more characters at the end, zip or postal codes are of the required format, and so on.

Each of these may be verified using regular expressions, which scan the input for certain patterns. An example for e-mail address verification is the PHP code shown below. This evaluates to true if an e-mail address was entered in the field named 'email'.

preg_match('/^.+@.+\..{2,3}$/',$_POST['email']);

This code just constructs a regular expression based on the format described above for an e-mail address. Note that this will return true for anything with an @ sign and a dot followed by 2 or 3 characters. That is the general format for an e-mail address, but it doesn't mean that address necessarily exists; you'd have to send mail to it to be sure of that.
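A minimal usage sketch, assuming the form posts a field named 'email', might look like this:

if (preg_match('/^.+@.+\..{2,3}$/', $_POST['email'])) {
    // the field looks like an e-mail address; carry on processing
} else {
    echo 'Please enter a valid e-mail address.';
}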

Interesting as this is, how does it relate to security? Well, consider a guestbook as an example. Here, users are invited to enter a message into a form, which then gets displayed on the HTML page along with everyone else's messages. For now, we won't go into database security issues; the problems dealt with below can occur whether the data is stored in a database, a file, or some other construct.

If a user enters data which contains HTML, or even JavaScript, then when the data is included into your HTML for display later, their HTML or JavaScript will also get included.

If your guestbook page displayed whatever was entered into the form field, and a user entered the following,

Hi, I <b>love</b> your site.

Then the effect is minimal; when displayed later, this would appear as,

Hi, I love your site.

Of course, when the user enters JavaScript, things can get a lot worse. For example, the data below, when entered into a form which does not prevent JavaScript ending up in the final displayed page, will cause the page to redirect to a different website. Obviously, this only works if the client has JavaScript enabled in their browser, but the vast majority of users do.

Hi, I love your site. Its great!<script
language="JavaScript">document.location="http://www.acunetix.com/";</script>

For a split second when this is displayed, the user will see,

Hi, I love your site. Its great!

The browser will then kick in and the page will be refreshed from www.acunetix.com. In this case the alternative page is fairly harmless, although the attack still amounts to a denial of service: users can no longer get to your guestbook.

Consider a case where this was entered into an online order form. Your order dispatchers would not be able to view the data because every time they tried, their browser would redirect to another site. Worse still, if the redirection occurred on a critical page for a large business, or the redirection was to a site containing objectionable material, business may be lost as a result of the attack.

Fortunately, PHP provides a way to prevent this style of PHP hack attack. The functions strip_tags(), nl2br() and htmlspecialchars() are your friends here.

strip_tags() removes any PHP or HTML tags from a string. This prevents the HTML display problems, the JavaScript execution (the <script> tag will no longer be present) and a variety of problems where there is a chance that PHP code could be executed.

nl2br() converts newline characters in the input to <br /> HTML tags. This allows you to format multi-line input correctly, and is mentioned here only because it is important to run strip_tags() prior to running nl2br() on your data, otherwise the newly inserted <br /> tags will be stripped out when strip_tags() is run!

Finally, htmlspecialchars() will entity-quote characters such as <, > and & remaining in the input after strip_tags() has run. This prevents them being misinterpreted as HTML and makes sure they are displayed properly in any output.
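Before discussing when to call each one, here is a minimal sketch of the three working together, assuming a guestbook form field named 'message':

$message = strip_tags($_POST['message']);   // at input time: remove all HTML and PHP tags
// ... store $message, then fetch it back when building the page ...
echo nl2br(htmlspecialchars($message));     // at output time: entity-quote first, then add <br />

Note the output order: htmlspecialchars() runs before nl2br(), so the <br /> tags that nl2br() inserts are not themselves entity-quoted.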

Now that those three functions have been presented, there are a few points to make about their usage. Clearly, nl2br() and htmlspecialchars() are suited to output formatting, called on data just before it is output, allowing the database- or file-stored data to retain normal formatting such as newlines and characters such as &. These functions are designed mainly to ensure that output of data into an HTML page is presented neatly, even after running strip_tags() on any input.

strip_tags(), on the other hand, should be run immediately on input of data, before any other processing occurs. The code below is a function to clean user input of any PHP or HTML tags, and works for both GET and POST request methods.

function _INPUT($name)
{
    // Return the named request value with any PHP and HTML tags stripped.
    // isset() guards against notices when the variable was never submitted.
    if ($_SERVER['REQUEST_METHOD'] == 'GET')
        return isset($_GET[$name]) ? strip_tags($_GET[$name]) : null;
    if ($_SERVER['REQUEST_METHOD'] == 'POST')
        return isset($_POST[$name]) ? strip_tags($_POST[$name]) : null;
}

This function could easily be expanded to include cookies in the search for a variable name. I called it _INPUT because it directly parallels the $_ arrays which store user input. Note also that it does not matter whether the page was requested with a GET or a POST method; the code can call _INPUT() and expect the correct value either way. To use this function, consider the following two lines of code, which both have the same effect, but the second strips the PHP and HTML tags first, thus increasing the security of the script.

$name = $_GET['name'];
$name = _INPUT('name');

If data is to be entered into a database, more processing is needed to prevent SQL injection, which will be discussed later.

Executing Code Containing User Input

Another concern when dealing with user data is the possibility that it may be executed in PHP code or on the system shell. PHP provides the eval() function, which allows arbitrary PHP code within a string to be evaluated (run). There are also the system(), passthru() and exec() functions, and the backtick operator, all of which allow a string to be run as a command on the operating system shell.

Where possible, the use of all such functions should be avoided, especially where user input is entered into the command or code. An example of a situation where this can lead to attack is the following command, which would display the results of the command on the web page.

echo 'Your usage log:<br />';
$username = $_GET['username'];
passthru("cat /logs/usage/$username");

passthru() runs a command and displays the output as output from the PHP script, which is included into the final page the user sees. Here the intent is obvious: a user can pass their username in a GET request such as usage.php?username=andrew and their usage log would be displayed in the browser window.

But what if the user passed the following URL?

usage.php?username=andrew;cat%20/etc/passwd

Here, the username value now contains a semicolon, which is a shell command terminator, and a new command afterwards. The %20 is a URL-Encoded space character, and is converted to a space automatically by PHP. Now, the command which gets run by passthru() is,

cat /logs/usage/andrew;cat /etc/passwd

Clearly this kind of command abuse cannot be allowed. An attacker could use this vulnerability to read, delete or modify any file the web server has access to. Luckily, once again, PHP steps in to provide a solution, in the form of the escapeshellarg() function. escapeshellarg() wraps its argument in single quotes and escapes any single quotes already present in the string, so that the whole value is passed to the shell as one literal argument; special characters such as semicolons lose their command-terminating meaning. This ensures that code such as that presented below is safe to run.

$username = escapeshellarg($_GET['username']);
passthru("cat /logs/usage/$username");

Now, if the attacker attempts to read the password file using the request string above, the shell will attempt to access a file called “/logs/usage/andrew;cat /etc/passwd”, and will fail, since this file will almost certainly not exist.

It is generally agreed that calling eval() on code containing user input should be avoided at all costs; there is almost always a better way to achieve the desired effect. However, if it must be done, ensure that strip_tags() has been called, and that any quoting and character escapes have been performed.

Combining the above techniques to provide stripping of tags, escaping of special shell characters, entity-quoting of HTML and regular expression-based input validation, it is possible to construct secure web scripts with relatively little work over and above constructing one without the security considerations. In particular, using a function such as the _INPUT() presented above makes the secure version of input acquisition almost as painless as the insecure version PHP provides.

How to check for PHP vulnerabilities

The best way to check whether your web site & applications are vulnerable to PHP hack attacks is by using a Web Vulnerability Scanner. A Web Vulnerability Scanner crawls your entire website and automatically checks for vulnerabilities to PHP attacks. It will indicate which scripts are vulnerable so that you can fix the vulnerability easily. Besides PHP security vulnerabilities, a web application scanner will also check for SQL injection, Cross site scripting & other web vulnerabilities.

Acunetix Web Vulnerability Scanner ensures website security by automatically checking for SQL injection, Cross site scripting and other vulnerabilities. It checks password strength on authentication pages and automatically audits shopping carts, forms, dynamic content and other web applications. As the scan is being completed, the software produces detailed reports that pinpoint where vulnerabilities exist. Take a product tour or download the evaluation version today!

Scanning for XSS vulnerabilities with Acunetix WVS Free Edition!

To check whether your website has cross site scripting vulnerabilities, download the Free Edition from http://www.acunetix.com/cross-site-scripting/scanner.htm. This version will scan any website / web application for XSS vulnerabilities and it will also reveal all the essential information related to it, such as the vulnerability location and remediation techniques. Scanning for XSS is normally a quick exercise (depending on the size of the web-site).

Later In The Series

This series will go on to look at SQL databases, and protecting against SQL injection attacks, as well as file operations and session management, including a look at one of the features of PHP designed to increase security and avoid PHP hack attacks: the PHP Safe Mode.

Source: Website security

PHP / SQL Security - Part 2

Last Time...
In the previous article, I looked at processing and securing user input when it is to be redisplayed or executed as PHP code. Now it's time to consider entering that data into a database, and to cover the security issues which arise when doing so.

SQL Injection

SQL (Structured Query Language) is the language used to interface with many database systems, including MySQL, PostgreSQL and MSSQL. Certain words and characters are interpreted specially by SQL, as commands, separators, or command terminators, for instance.

When a user enters data into a form, there is nothing stopping them entering these special commands and characters. Consider the PHP code below:

$query = "INSERT INTO orders(address) VALUES('{$_GET['address']}')";
$result = mysql_query($query);

A form with a textbox named address would be used to gather the information for this page. We'll ignore any other form elements for now, but obviously there'd be the order items, a name, possibly a price, a delivery date, and so on, which would also all need storing in a database.

Imagine a perfectly legitimate user comes along and enters the following address

14 King's Way
Kingston
Kingham County

The database would spit back an error because the SQL command would be malformed. In the query, the address value is surrounded by single quotes, because it is a string value. When the database hits the apostrophe in King's Way, it will treat it as the closing single quote, and end the string. The rest of the address will be treated as SQL commands. Since these “commands” don't exist, the database returns to PHP with an error.

Now consider an attacker entering the following information into the form:

14 Kings Way
Kingston
Kingham County');DELETE FROM orders; INSERT INTO orders(address) VALUES('Your data just got deleted by us. We win

Now, the command will succeed. The expected string data is presented, along with a closing quote. The opening ( after VALUES is closed, and the SQL command is terminated using a semicolon. After this, another command begins, one which tells the database to delete the entire contents of the orders table. Then, because the SQL hard-coded into the PHP contains another closing single quote, a third SQL command is entered, which leaves an open string value. This will be matched up with the final quote in the hard-coded SQL, and the entire command is syntactically correct, as far as SQL is concerned, and will therefore execute with no complaint.

Clearly, it is not desirable for any user to be able to issue arbitrary queries simply by posting data in a form. Luckily for us, as with the PHP and HTML input issues discussed in part 1, PHP provides a solution. The addslashes() and stripslashes() functions step in to prevent the above scenarios, and any number of similar attacks.

addslashes() will escape characters that have a special meaning in an SQL string, such as the single quote (') and the double quote ("), by prefixing them with a backslash (\); the backslash itself is also escaped, becoming \\. stripslashes() performs the opposite conversion, removing the prefix slashes from a string.

When entering data into a database, addslashes() should be run on all user-supplied data, and on any PHP-generated data which may contain special characters. To guarantee safety, simply run addslashes() on every string input to the database, even if it was generated internally by a PHP function. Note that the slashes protect only the query itself; the database stores the plain, unescaped text, so stripslashes() is only needed on data that still carries the slashes, for example input that has already been escaped once by the magic_quotes_gpc setting.
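A minimal sketch of the order example from above with the escaping added (the field name 'address' is carried over from the earlier code):

$address = addslashes($_GET['address']);
$query   = "INSERT INTO orders(address) VALUES('$address')";
$result  = mysql_query($query);

Now the apostrophe in King's Way reaches the database as \', which SQL treats as a literal quote character inside the string rather than as its terminator.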

Non-String Variables

Since PHP automatically determines the type of a variable, you should also check variables which you expect to be integers or other data types. For instance, the int type in SQL does not need to be quoted, but it is still possible for a string in a PHP variable to be inserted into an SQL query in the position an integer would usually take. Consider the example below.

$query = "INSERT INTO customers(customer_number) VALUES({$_POST['number']})";

If a user supplied the value

0); DROP TABLE customers; CREATE TABLE customers(customer_id

then the same kind of attack as before can be mounted. In this case addslashes() doesn't help at all: the attack string contains no quotes, backslashes or NUL bytes, so it passes through unchanged. The only way to guard against this kind of attack is to perform consistent input validation. Make sure that a value you think should be an integer really is: a regular expression that matches only digit characters should succeed on the string before it is placed into the query. A value that passes such a check cannot contain SQL commands, and so cannot cause any errors or unexpected code execution.
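A minimal sketch of such validation, reusing the customer_number example (the die() call is a placeholder; report the failure however suits your application):

$number = $_POST['number'];
if (!preg_match('/^\d+$/', $number)) {
    die('Invalid customer number.');        // reject anything that isn't purely digits
}
$query  = "INSERT INTO customers(customer_number) VALUES($number)";
$result = mysql_query($query);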

Database Ownership & Permissions

There are other precautions you may be able to take to prevent some of the more serious SQL injection attacks. One such course of action is to implement access control on the database. Many database packages support the concept of users, and it should be possible to set an owner, with full permissions to modify anything within the database, and other users which may only connect and issue SELECT or INSERT queries, thus preserving any data already entered against DELETE or DROP commands. The specifics of achieving such protection will depend on the database system you're using, and consulting the documentation or user manual should reveal how to implement access control.

The user designated as the database owner should never be used to connect to the database from a PHP script; owner privileges should be reserved for consoles or web admin interfaces such as phpMyAdmin. If a script requires the DELETE or UPDATE commands, it should ideally use a separate user account from the standard account, so that the standard account can only add data using INSERT and retrieve data using SELECT. This separation of permissions prevents attacks by limiting the effectiveness of any one SQL injection avenue. If, by poor or forgetful programming, a user can inject SQL into one script, they will gain only SELECT / INSERT permissions, or only UPDATE / DELETE permissions, and never sufficient permissions to drop entire tables or modify the table structure using the ALTER command.
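As a rough sketch of what this separation looks like from PHP, assuming two hypothetical accounts created through your database's access control, site_select (SELECT only) and site_insert (INSERT only), and a placeholder database named 'shop':

$read = mysql_connect('localhost', 'site_select', $select_pass);  // $select_pass from config
mysql_select_db('shop', $read);
$rows = mysql_query('SELECT address FROM orders', $read);         // reading is allowed

$write = mysql_connect('localhost', 'site_insert', $insert_pass); // $insert_pass from config
mysql_select_db('shop', $write);
mysql_query("INSERT INTO orders(address) VALUES('14 Kings Way')", $write);  // writing is allowed

// Neither account can execute DROP, DELETE or ALTER, so an injection through
// either connection is limited to that connection's privileges.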

File Permissions

Data in a database system must be stored somehow on disk. The database system itself is responsible for exactly how the data is stored, but usually there will be a data/ directory under which the database keeps its files. On a shared hosting system, or a system which allows users some access to the filesystem, it is essential to reduce the permissions on these files to a bare minimum; only the system user under which the database process itself runs should have read or write access to the data files. The web server does not need access, as it will communicate with the database system for its data instead of accessing the files directly.

Making Database Connections

PHP usually connects to the database management system through a TCP socket or a local domain socket (on UNIX/Linux). Where possible, you should prevent connections to this socket from IP addresses or processes other than the web server, and any other process which needs access to the data (for example, if you have internal order processing software which does not run through the web server). If the web server and the database server are on the same computer, and no other services are running which may be exploited to provide a database connection, it should be sufficient to allow only the local host (given by the hostname localhost or the IP address 127.0.0.1) access to the TCP port on which the database manager is listening. If the web server and database server are on different machines, the IP of the web server should be specified explicitly. In short, limit the access to the database as much as possible without breaking anything that needs access to it. This should help to ensure that the only access channel is via your PHP scripts, and those have been written securely enough to check for unexpected or unauthorised data and reject it before it reaches the database.

Database Passwords In Scripts

Finally, a word on database passwords. Each database user should be assigned a password, and your scripts will need this password in order to initiate a connection to the database. Ideally, scripts containing configuration data such as the database username and password should be stored outside of the web server's document root. This prevents a casual attacker retrieving the plain text of the configuration file and obtaining the database password.
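A minimal sketch, with the path below standing in for wherever your host lets you store files outside the document root:

require '/home/example/private/db_config.php';   // hypothetical file defining $db_user and $db_pass
$link = mysql_connect('localhost', $db_user, $db_pass);

A request for the config file's URL now fails, because the file is not under the document root at all; only scripts running on the server can read it.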

Another method to consider is to use a .php extension, instead of the commonly used .inc extension, for included files. The .php extension ensures that the file is passed through PHP before output is sent to the user's browser, and so it is possible to prevent display of data within the file simply by not echoing it!

.htaccess files provide a third method of protecting against password grabbing. If you deny web access to files whose names begin with .databaseconfig, for instance, a user cannot easily obtain the file through the web server directly.

Of course, a user may still be able to exploit file access security vulnerabilities in scripts to obtain, or even to change, the contents of the file.

How to check for PHP vulnerabilities

The best way to check whether your web site & applications are vulnerable to PHP security attacks is by using a Web Vulnerability Scanner. A Web Vulnerability Scanner crawls your entire website and automatically checks for vulnerabilities to PHP attacks. It will indicate which scripts are vulnerable so that you can fix the vulnerability easily. Besides PHP security vulnerabilities, a web application scanner will also check for SQL injection, Cross site scripting & other web vulnerabilities.

The Acunetix Web Vulnerability Scanner scans for SQL injection, Cross site scripting, Google hacking and many more vulnerabilities. For more information & a trial download click here.


Later In The Series

The next article in this series looks at PHP and file access in detail.

Source: Website security