
Saturday, December 16, 2006

Free Web Hosting

Can you really get FREE web hosting?

Yes, there are hundreds of free hosting web sites, in the sense that you never pay any money to have your website hosted. Generally they cost you instead in time, web hosting restrictions, or in modifications to your free web pages such as added pop-ups, banners, or other adverts. When looking for free web hosting (especially on search engines), beware that a large number of commercial web hosts claim to offer free hosting services but attach a catch, such as paying an excessive amount for a domain name or other service, so they aren't really free. The free hosting guide below will give you some tips for finding the right free web hosting company for you.

How do the free web hosts make money?

The free website hosts often make money in other ways, such as putting banner, pop-up, or pop-under ads on your free web pages. Some free web hosting companies do not put ads on your site, but require you as the webmaster to click on banners in their control panel or sign-up process, or simply display banners in the file manager in hopes you will click them. Some lure visitors with free hosting in hopes you will upgrade and pay for advanced features. A few send you occasional emails with ads, or may even sell your email address. A newer method that is becoming popular is requiring a certain number of "quality" forum postings, usually as a means of getting free content and thereby being able to display more ads to their website visitors.

Are free web hosts reliable?

Generally no, although there are a few exceptions. If the free host makes money from banner ads or other revenue sources tied directly to the free hosting service, then it will likely stay in business, provided no one abuses its web hosting server with spam, hacking, etc., as often happens to new free web hosting companies with liberal signup policies. If the free host accepts just anyone, especially with automated instant activation, and offers features such as PHP or CGI, then some users will invariably try to abuse it, which can leave the free server with a lot of downtime or make the free web server slow. It is best to choose a very selective free host which only accepts quality sites (assuming you have one).

Uses for free webspace

Free web hosting is not recommended for businesses unless you can get domain hosting from an ad-free host that is very selective. Other reasons for using free hosting websites would be to learn the basics of website hosting, have a personal website with pictures of your family or whatever, a doorway page to another web site of yours, or to try scripts you have developed on different web hosting environments.

How to find the right free web hosting site

The best place to search for free web hosting is on a free webspace directory website (i.e. a web site which specializes in listing only free web hosting providers). Some add new free hosts pretty much every week (and, if updated that often, usually have to delete about as many). Many others almost never update their listings, so a huge percentage of their links and info are outdated; unfortunately, that includes most of the directories that were the best several years ago. The problem is that free hosts change so often, and most fold in less than a year (sometimes after only a day or two), that it is hard to keep such a free hosting directory up-to-date.

For a more selective list of the best free hosts, there are also these free webspace hosting directories:
Free WebHosts
Best Free Webspace
Free Hosting
Free Webspace
Other (usually less useful) resources include subcategories of freebies sites, search engines and directories, and forums. Your ISP might also supply you with free webhosting.

Hints for finding the best free web hosting service

Generally it is best not to choose a free hosting package with more features than you need; also check whether the company somehow receives revenue from the free hosting itself to keep it in business. As already mentioned, try to get accepted by a more selective free host if possible. Look at other sites hosted there to see what kind of ads would appear on your site and how fast the server is (keep in mind newer hosts will be faster at first). Read the Terms of Service (TOS) and the host's features to make sure it offers enough bandwidth for your site, a large enough webspace and file size limit, and any scripting options you might need. Read free webspace reviews and ratings by other users on free hosting directories. If you don't have your own domain name, you might want to use a free URL forwarding service so you can change your site's host if needed.

Recommended free web hosts

It would be hard to recommend any single free web space host that everyone would like, as different people need different web hosting features and have different priorities, and webhosting quality can change over time. Also, some people want free domain hosting (where you own the domain), while others might not be able to buy a domain name. Here are some of the most recommended free web hosts, and their main features.

You can read the original article at this link:

More Lists of Free Web Hosts


Thursday, December 07, 2006

Manipulating Data in TEXT Type Columns

It's always tricky to do string manipulation in TEXT datatype fields.

For many SQL Server 2000 DBAs, working with text columns in T-SQL is no different from any other datatype. But there are some tricks you need to know when you work with very large values. Leo Peysakhovich brings us some advanced queries that you might need if you work with large XML documents as he does.
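To give a flavor of what makes TEXT columns tricky, here is a minimal sketch (the Docs table and its contents are hypothetical, not from the article): instead of ordinary string functions, you obtain a text pointer and patch the value in place with UPDATETEXT.

```sql
-- Hypothetical table with a TEXT column
create table Docs (DocID int not null identity (1, 1), Body text null)
insert into Docs (Body) values ('Hello world')

-- TEXT columns can't be manipulated like varchar values;
-- you work through a 16-byte text pointer instead.
declare @ptr varbinary(16)
select @ptr = TEXTPTR(Body) from Docs where DocID = 1

-- Replace 5 characters starting at zero-based offset 6 with 'there'
updatetext Docs.Body @ptr 6 5 'there'
-- Body is now 'Hello there'
```

READTEXT and WRITETEXT work the same way, reading or overwriting through a text pointer rather than operating on the column directly.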

Please visit this link to learn more about how to manipulate data in TEXT type columns.


Friday, October 27, 2006

Contract Coding

Thanks to Damon Armstrong for this article.

Contracts and scope definition

You’re called a "contractor", right? As such, it seems that you should have an actual contract, for the sake of propriety if nothing else. The reality, however, is that most contractors start contracting without ever drawing up an official contract, or even fully defining the scope and deliverables for the project they are undertaking (guilty as charged!). Clearly, this can lead to serious problems down the line.

Project scope definition

One of the first things you need to do is to define a clear set of objective deliverables for the project. Normally, you define deliverables in a 'scope' document, which outlines the extent of the work you are agreeing to complete. You can also think of the scope document as a high-level 'design' document on which your client signs off – just like a contract.

When you create your scope document, you are essentially defining what needs to be done in order for the project to be considered 'complete'. Outline each item that you are going to complete, and attempt to be specific about what 'complete' means. If you are building a website, make sure to identify all of the pages in the site and the specific functionality for each page. The more detailed you are, the better off you are if you have a disagreement with your client over part of the project.

Also, remember to include a statement that indicates you are only responsible for items specifically designated as deliverables in the scope document. This helps protect you from any assumptions the client makes, but which he forgets to tell you about. For example, if you deliver an e-commerce site and your client comes back and says that you are responsible for all the product data entry, you can point them to this clause in the scope document and tell them that, since the data entry was not a deliverable, you are not responsible for its completion. If you want to be doubly-protected, you should also include a section outlining specific project items you are not going to deliver. This may seem a bit redundant, but specifics are always better than generalities.

Alongside the scope document, you also need to prepare a formal contractual agreement for the work you are about to undertake.

Writing up the contract

There are many tangible benefits to be gained from establishing a written contractual agreement. Hashing out a contract forces you and your client to outline all of the terms of your engagement, and it legally binds both to the fulfillment of those terms (at least, it does if you do it right). If you complete all the work outlined in the contract and your client fails to remunerate you in the method specified, then you have a means by which to pursue payment in court. Of course, it also means that you need to do a good job of outlining specifics, because ambiguity can only harm you.

So, the question is: what do you need to have in a contract? Here are some of the things that you will definitely want to consider:

Project deliverables

As described in the previous section, you and the client should formally agree and sign off on a scope and deliverable document, which can then be explicitly referred to in the contract.

Project timelines

You need to define how both sides handle time-sensitive situations and deadlines. Is there a deadline for finishing the project? Is there a penalty for finishing the project late? A bonus for finishing the project early? And, perhaps most importantly, what happens when the project timeline shifts because the client fails to provide a time-sensitive deliverable?

I’ve seen too many clients say they need something in a month, wait three weeks to give you what you need to start on the project, and expect you to have it done in a week. Always include something about delaying the project timeline in response to delays for items on which you depend, but the client fails to provide. You will be amazed at how quickly you can coax your client into getting things done when they know there are defined consequences for delays. You may also want to specify how you are going to bill clients during a period when you are waiting on a dependency.

Payment terms

You need to define all aspects of how you will be compensated for your work on the project. Are you paid based on an hourly rate? Are there limitations on the number of hours you can bill in a given time frame? Is it a fixed-bid project? How, when, and to whom do you submit invoices? How much time does the client have to pay an invoice once it is received? What happens if the client fails to pay an invoice?

Travel time / expenses

If you need to travel out to the client for any reason, can you bill your travel time? Can you expense the mileage? And how do you handle other ad hoc expenses that come up during the course of the project?

Prerequisite needs

If there is something that you know you will need, in order to be successful on the project, make sure you include it in the contract. For example, if you expect to have VPN access to their network, or need a specific piece of hardware or software, then outline it specifically. This helps you avoid assumptions about your environment that may hinder your ability to complete a job on time and on budget.

Third-party software / licenses

If you know that you need to use any third-party software, you should outline who is going to pay for those tools, and who will own the licenses for them when the engagement is over. Even if you are not planning on using third-party software, it may help to have a clause in your contract that states that the client is responsible for paying for any third-party tools that are deemed necessary for the project. And then you will need to define how 'deemed necessary' is determined, because you do not want it to be ambiguous.

Communication

If you want to keep your sanity, be sure to define the process by which the client contacts you with questions, concerns or additional information about a project. If you do not want the client calling you during the day while you’re at your real job, then put it in the contract. To drive home the point, you can even put in a clause that allows you to charge them a premium if they do call you during a restricted time.

Maintenance

Almost every project you work on will have bugs, but you cannot provide indefinite support for an application. Make sure that you outline, in your contract, how you plan on handling maintenance with your client. This is a touchy subject, because clients quickly find issues with an application once it is deployed, and usually want those fixed as part of the original cost of the project. Normally, you will want to include the cost of a few maintenance hours in the original cost of the project, so you can stick around and fix problems for a little while after deployment. But you also want to protect yourself from a ceaseless stream of minor requests like changing the text of a label from this to that, moving a textbox from here to there, using a different shade of blue as a background, etc.

Liability

Cover your butt. I’ve heard of clients going after developers for a myriad of reasons, like lost employee productivity or even lost revenue due to glitches in software. There are a lot of things that can happen on a project and you need to have a broad-sweeping statement that attempts to cover the unforeseen. The more specific you can be about the things that can go wrong, the better off you are. For example, if you are building a billing system, then you should probably have a clause indemnifying you of the cost of any lost revenue due to billing errors, system down time, and so on.

Anything is better than nothing when it comes to contracts, so writing a contract yourself is a far better option than having nothing to go on at all. But you should be keenly aware of the fact that a professional contract lawyer is far more qualified than you are at writing contracts, so you should seek assistance from one, to help you write a solid contract. Writing contracts is a mysterious journey into the complex art of legal prose, most aptly scribed by a professional who actually understands the ramifications of what they are putting to ink. You may think your contract writing skills are akin to the works of Thoreau, but when you come up against a really good lawyer, you will quickly find they are more on par with Curious George gets his Ass Handed to Him In Court. Normally, you can ask a lawyer to write up a fairly generic contract that you can use for most of your engagements, so it’s well worth the investment.

Layers of Protection

A wise man also once told me that a contract is just an invitation to a fight. If you get to the point where you need to enforce something in your contract, then you need the help of the court system. And that course of action normally requires spending a lot of time, energy, and money on the legal process. So it’s also good to avoid a protracted legal battle by protecting yourself in other ways.

Full up-front payments

One way to make sure you get paid for your work is to get paid before you start working! However, negotiating a full up-front payment is a fairly difficult task, because it creates a risk reversal for the client. Instead of you taking on the risk of not getting paid for your work, the client takes on the risk of not getting the work for which they have paid. Some companies are willing to take that risk if you have a good reputation and they trust you. Some companies are so desperate for help that they will agree to anything. You should always check into a desperate client to see the source of their desperation. It may be that they need someone quickly and are willing to pay up-front to secure a qualified contractor. It may be that the project is a complete mess and they cannot get anyone in their right mind to touch it. Knowing their reasoning can help you determine if a project is really worth your time and effort.

One downside to accepting a full up-front payment is that some clients may feel cheated when you declare a project complete while there are lingering issues which the client feels you should resolve. But it’s far better, in this situation, to have the money in hand, because this is the point in time when some clients would refuse to pay you until you fixed those issues. It also illustrates the need for appropriate project scope definition, outlining what deliverables are covered in your fees and how to handle maintenance after finishing the project.

Partial up-front payments

A more common alternative to a full up-front payment is a partial up-front payment. This reduces the overall feeling of risk for a client, because they are not paying for everything up front. Most businesses understand the importance of an initial investment in a project, as a way of demonstrating a commitment, both to the project itself and to maintaining a healthy client-contractor relationship. When defining partial payments, you need to outline the specifics of how such payments are to be made. Normally, the client pays an up-front amount and then makes additional payments as you provide them with project deliverables. This creates a cycle in which you finish a part of the project and are paid for that part. It also sets up an easier environment for billing your client for additional work that arises in the middle of a project. If they want to add something to the project, or keep you on longer for maintenance purposes, then you can simply schedule additional payments for the additional work.

Maintenance and buckets of time

Most contractors accept a project, thinking that they are going to write some software, send it to the client, get a paycheck, and be done with it forever. But that is rarely the case. Business processes change over time and, when they do, your client may need to update your application to account for that change. And guess who they are going to call! Sometimes the requested changes are so significant that you can simply treat it as a completely new project. Often, however, you will get a client who has lots of little changes that come up sporadically. One tactic for dealing with this situation is to establish a maintenance time-bucket.

In this situation, the client pays you for a certain number of maintenance hours (say 5 hours). They can call you and request any changes that they want and you can deduct the time you spend on the fix from the time-bucket. When it starts getting low, the client simply refills the time-bucket by paying you for another set of hours.

Another suggestion is to establish an initial maintenance time-bucket for your project, and explain the concept to your client. Tell them that you will be available for this many hours after the project, and that, if they want to retain you for minor fixes, they can refill that bucket as needed. It’s a seamless way to move from project completion into a maintenance billing cycle.

Binary-only deployments

You may not be able to negotiate an up-front payment on every project, but you can still protect your development efforts with other strategies. When you compile a project, your easily readable source code is reduced to an effectively indecipherable jumble of machine language. So one option for protecting your code is to give your clients only the compiled form of your work. Binary deployments allow the client to interact with your work and even deploy it into a production environment, while allowing you to retain a bargaining chip if the client fails to pay you for the work you have completed. Most applications have bugs and eventually need changes, so the client will ultimately need to acquire the source code.

Licensing and trial periods

Although source code is important, some clients can still find ways to abuse a binary-only deployment scenario. I spoke with one contractor who deployed his solution, only to have the client turn around and demand that he add extra reporting functionality to the application before they would pay him. It put him in a bad position because, if he walked away, the client could continue to use his application as it was, at least until they found someone else to rebuild the system using his work as an example. You may even encounter clients who lose their sense of urgency once they have a working solution in place and drag their feet when it comes to paying you. So how do you protect a binary-only deployment? By programming your application to stop working after a certain trial period!

Adding trial-period support into your application can be as simple or complex as you choose. You could add a full-blown licensing system to your application that checks a license key to determine if your software has been paid for or should run in trial mode. You could throw if-then statements around important sections of code that disable the application after a certain date. If you choose the latter, which is the cheapest and least difficult route, I highly recommend centralizing your date-checking logic so you can easily disable the date checks. If you have scattered checks throughout your application, then you are bound to miss one of them and incur the wrath of an angry client when the application they paid for suddenly stops working.

You may also want to consider targeting specific pieces of functionality instead of the application as a whole. For example, if you build a web application that has a customer-facing front-end and a back-end system that the business uses to manage the website, consider disabling the back-end system before disabling the front-end. Disabling the back-end system does not affect the customer’s ability to make purchases, but it does affect the client’s ability to process those orders. When you re-enable the back-end system, the client can process the backlog of customer purchases without losing any sales during the downtime. It’s an effective means of getting the client’s attention, without making too much of a negative impact on the client’s bottom line.
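As a sketch of this advice (the language, names, and trial date here are my own assumptions, not the author's code), centralizing the date check and gating a back-end feature might look like:

```python
from datetime import date

# Hypothetical trial settings; the date and flag are illustrative.
TRIAL_EXPIRES = date(2007, 1, 31)
CHECKS_ENABLED = True  # single switch: flip to False once the client pays

def trial_expired(today=None):
    """Centralized date check. Every gated feature calls this one
    function, so disabling the trial means changing exactly one place
    instead of hunting down checks scattered through the code."""
    if not CHECKS_ENABLED:
        return False
    return (today or date.today()) > TRIAL_EXPIRES

def process_orders():
    # Back-end feature gated first, per the advice above: customers can
    # still buy, but the client can't process orders until they pay.
    if trial_expired():
        return "Application unavailable"  # neutral message, no tirades
    return "orders processed"
```

Note the neutral expiry message: as the next section argues, "Application Unavailable" is much easier to explain away than anything angrier.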

Never use trial period protection as a last-ditch act of revenge against a client who fails to pay you for your services. You will damage your reputation, and you may leave yourself legally liable for losses the client may incur as a result of their application suddenly ceasing to function. Always outline the trial period in your contract to let the customer know, in advance, that a trial period exists and exactly when the trial period expires. You should also refrain from displaying inappropriate comments to the client when your trial period expires. It may seem like fun to write a nasty message about how they should have paid you, but simply informing the user that the application is unavailable is much more advisable. Remember, if you mess up and accidentally display the expired trial message, it’s much easier to explain away an 'Application Unavailable' message than an expletive-laced tirade.

Informing your clients of all protective measures

You should expressly outline any measures you plan to take to protect your work (such as binary deployments, trial periods etc) in the contract, and discuss them with your client at the beginning of a project. By outlining the specifics, you help protect yourself from legal liability in the event that the client feels your protective measures damaged their business in some way. If you suddenly inform the client of such measures at the end of the project, then you are much more likely to get yourself into trouble and upset your client.

In conclusion

Remember, the most important thing you can do as a contractor is to establish an actual contract with your client. Effective communication is essential for a project to run smoothly. The process of hashing out a contract forces you to communicate openly with your client about topics most people seem too squeamish to bring up in normal conversation. Money and failure seem to be taboo subjects, but the finality of a contract makes it easy to discuss delicate financial matters and what happens if anyone fails to live up to their side of the bargain. Every client you encounter is going to be a bit different, so use your judgment in determining the best approach for payment options and source code protection.

You can read the original article by Damon Armstrong at this link:
Thanks to Damon Armstrong for providing this wonderful insight on this subject.

Check Your SQL Server Identity

We expect developers to be able to create stored procedures, write moderately complex SQL statements, and even the occasional trigger where needed. One question I like to ask goes something like this:

"Let's take a scenario using SQL Server 2000 where I'll be passing in two variables (firstname and lastname) to a stored procedure. That procedure should insert them into a table called TEST that has columns also called firstname and lastname. Table TEST has a primary key column named ContactID which is an integer and is also marked as an identity column. How would I obtain and return the primary key for the row just inserted?"

Stop for a moment and see if you know the answer. Do you know how to create the stored procedure? Obtain the value? Return it to the calling application?

A fair question to ask me is - why is this important? For me, it's a fundamental test to see if someone has worked with data in anything beyond a trivial way. Take the standard order/order detail scenario - how do you insert the details if you don't have the primary key of the order? And while you may have had the luck (good?) to work on a system with natural keys, not every system uses them and identities are the most common way of solving key generation in SQL. More importantly, if you ever do work on a system where identities are used and you rely on @@Identity, you could get some very unusual results at some point in the future when someone adds an auditing trigger. It's not a deal breaker question, but it's an interesting one to lead them into a conversation about dealing with related tables.

I get a variety of answers, and most of them are, shall we say, less than optimal. Almost everyone figures out how to insert the values and knows to use either an output parameter or a return value, but almost everyone trips on the identity portion.

Wrong Answer #1 - Select max(contactid) from Test. This is wrong because it assumes that no one else will be inserting a row. I suppose you could make it work if you used the right isolation level, but doing that will most likely reduce your concurrency. It's also doing more than you need to.

Wrong Answer #2 - Select top 1 contactid from test order by contactid desc. This is wrong for the same reasons described above.

Wrong Answer #3 - Select the row back by querying on other data you inserted into the table, essentially saying that you inserted an alternative primary key made of one or more columns. This would work if your data supported it and guaranteed that those values were indeed unique. Still not a good idea.

Wrong Answer #4 - In this one they almost get it right. They suggest using @@Identity, which will of course work (with caveats), but when I ask them if there are any concerns with this technique, I usually get one of the following:

- No, there are no concerns

- You have to query it quickly because it is a database wide setting and you have to get the value before someone else inserts a row into any table in the database.

- Yes, it retrieves the last identity value for the session which is usually the value you want, but could be incorrect if you had a trigger on TEST which inserted rows into another table that also had an identity column. In that case you'd get the identity value from that table instead of TEST (Note: this correctly describes the behavior @@identity exhibits).

Right Answer - Use Scope_Identity() because it's SQL 2000 (use @@Identity on SQL 7), and return the result as an output parameter (the return value is typically reserved for error conditions). Using @@Identity represents a possible future bug if auditing were deployed using an identity column as well.
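The interview question also asks for the stored procedure itself; a minimal sketch might look like this (the procedure name is my own, while the table and column names come from the scenario):

```sql
-- Sketch of the procedure the question asks for
create procedure InsertContact
    @firstname varchar(100),
    @lastname varchar(100),
    @ContactID int output
as
set nocount on
insert into TEST (firstname, lastname) values (@firstname, @lastname)
-- Scope_Identity() is unaffected by identity inserts that happen
-- inside triggers, unlike @@Identity
select @ContactID = Scope_Identity()

-- Calling it:
-- declare @id int
-- exec InsertContact 'John', 'Smith', @id output
```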

Now let's run a couple tests to prove that the right answer is really correct:

create database IdentityTest

use identitytest
create table TEST (ContactID int not null identity (1, 1), firstname varchar(100) null, lastname varchar(100) null)

insert into TEST Default Values
select @@Identity

This will return the value 1. Repeating it will return 2.

insert into TEST Default Values
select Scope_Identity()

This will return a value of 3.

Now let's start by proving that @@Identity can cause strange behavior. We'll create a history table first that has a new identity column, then we'll add an insert trigger to TEST.

create table TESTHISTORY (HistoryID int not null identity (1, 1), ContactID int not null, firstname varchar(100) null, lastname varchar(100) null)

create trigger i_TEST on dbo.TEST for insert as

set nocount on

insert into TESTHISTORY (ContactID, FirstName, LastName) select ContactID, FirstName, LastName from Inserted

Now let's test what happens:

insert into TEST Default Values
select @@Identity

Returns a value of 1. Inspecting TEST shows that the last row we inserted had a value of 4, the only row in TESTHISTORY has a historyid = 1.

insert into TEST Default Values
select Scope_Identity()

(Corrected, as pointed out by adk in the comments.)
Returns a value of 5.
Inspecting TEST confirms this, and confirms we inserted a second row into TESTHISTORY. Now let's test what happens if someone else inserts a row into TEST while we're busily working away in our stored procedure. Using the existing connection, we execute the first part:

insert into TEST Default Values

If we check the table we see that we just inserted row 6. Now open a second connection and execute the same statement:

insert into TEST Default Values

Checking the table reveals we just inserted row 7. Now go back to the original connection. We start with something we know should return the "wrong" result, and it does: the value 3.

select @@Identity

Now let's try scope_identity(). If all went well, it should return 6, not 7!

select Scope_Identity()

And it does, supporting the Right Answer detailed above. I know this is SQL trivia, the kind of stuff I think you shouldn't have to delve into, but if you're going to use the platform, you have to know how it works. Take this back and quiz your developers; you'll be treating them to some professional development, and you may save yourself a large headache one day too.

You can read the original article at this link:
Thanks to Andy Warren for this article

Wednesday, October 11, 2006

Using Profiler to Identify Poorly Performing SQL Server Queries

Identifying Long Running Queries is First Step

At this step in the SQL Server performance audit, you should have identified all the "easy" performance fixes. Now it is time to get your hands a little dirtier and identify queries (including stored procedures) that run longer than they should and use up more than their fair share of SQL Server resources.

Slow-running queries are ones that take too long to run. So how long is too long? That is a decision you have to make. Generally speaking, I use a cutoff of 5 seconds: any query running 5 seconds or less is generally fast enough, while queries that take longer are long-running. This is an arbitrary line. In the company where I work, the report writers, who write most of the queries run against our databases, have a different standard; they only consider a query long-running if it takes more than 30 seconds. So one of your first steps is to decide what you think a long-running query is, and then use that as your standard during this portion of the performance audit.

We don't have unlimited time to tune queries. All we can do is identify those queries that need the most work and tackle them first. If we have time left over, we can then focus on queries that are less critical (but still troublesome) to the overall performance of our SQL Servers. Also keep in mind that sometimes, no matter how hard you try, there may be little or nothing you can do to improve the performance of a particular query.

Before You Begin

For this part of the performance audit, you will be using the SQL Profiler tool that comes with SQL Server. As this article focuses on how to perform a performance audit, and not on how to use tools, it is assumed that you know how to use SQL Profiler. If you have not used it before, check out the SQL Server Books Online to get you started on the basics of how to use it.

Before you begin using Profiler to capture the query activity in your SQL Servers, keep the following in mind:

  • Don't run the Profiler on the same server you are monitoring, as this can noticeably and negatively affect that server's performance. Instead, run it on another server or workstation and collect the data there.

  • When running the Profiler, do not select more data than you need to collect. The more you collect, the more resources are used collecting it, slowing down performance. Only select those events and data columns you really need. I will make recommendations on exactly what to collect shortly.

  • Collect data over a “typical” production time, say over a typical 3-4 hour production period. This may vary, depending on how busy your server is. If you don’t have a “typical” production time, you may have to collect data over several different periods of a typical production day to get all the data you need.

When you use Profiler, you have two options of how to "set it up." You can choose to use the GUI Profiler interface, or if you like, you can use the built-in Profiler system stored procedures. While using the GUI is somewhat easier, using the stored procedures to collect the data incurs slightly less overhead. In this article, we will be using the GUI interface.
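For reference, a server-side trace scripted through the system stored procedures looks roughly like the sketch below. This is a minimal, hypothetical example (the file path is made up, and only TextData and Duration are subscribed here; the full event and column ID tables are in Books Online). It creates a trace to a file, subscribes the two events discussed below, filters on duration, and starts the trace:

-- Minimal server-side trace sketch (file path is hypothetical).
-- Event IDs: 10 = RPC:Completed, 12 = SQL:BatchCompleted
-- Column IDs: 1 = TextData, 13 = Duration
DECLARE @traceid int, @maxsize bigint
SET @maxsize = 100  -- maximum trace file size in MB

EXEC sp_trace_create @traceid OUTPUT, 0, N'C:\Traces\slow_queries', @maxsize, NULL

EXEC sp_trace_setevent @traceid, 10, 1, 1    -- RPC:Completed, TextData
EXEC sp_trace_setevent @traceid, 10, 13, 1   -- RPC:Completed, Duration
EXEC sp_trace_setevent @traceid, 12, 1, 1    -- SQL:BatchCompleted, TextData
EXEC sp_trace_setevent @traceid, 12, 13, 1   -- SQL:BatchCompleted, Duration

-- Keep only events of 1 second or more (Duration is in milliseconds
-- in SQL Server 2000); 0 = AND, 4 = greater than or equal
EXEC sp_trace_setfilter @traceid, 13, 0, 4, 1000

EXEC sp_trace_setstatus @traceid, 1  -- start the trace

Stop and close the trace later with sp_trace_setstatus @traceid, 0 and then sp_trace_setstatus @traceid, 2.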

What Data to Collect

Profiler allows you to specify which events you want to capture and which data columns from those events to capture. In addition, you can use filters to reduce the incoming data to only what you need for this specific analysis. Here's what I recommend:

Events to Capture

  • Stored Procedures--RPC:Completed

  • TSQL--SQL:BatchCompleted

You may be surprised that only two different events need to be captured: one for capturing stored procedures and one for capturing all other Transact-SQL queries.

Data Columns to Capture

  • Duration (data needs to be grouped by duration)

  • Event Class

  • DatabaseID (If you have more than one database on the server)

  • TextData

  • CPU

  • Writes

  • Reads

  • StartTime (optional)

  • EndTime (optional)

  • ApplicationName (optional)

  • NTUserName (optional)

  • LoginName (optional)

  • SPID

The data you want to actually capture and view includes some columns that are very important to you, especially Duration and TextData, and some that are not so important but can still be useful, such as ApplicationName or NTUserName.

Filters to Use

  • Duration > 1000 milliseconds (1 second)

  • Don’t collect system events

  • Collect data by individual database ID, not all databases at once

  • Others, as appropriate

Filters are used to reduce the amount of data collected; the more filters you use, the more unimportant data you can filter out. Generally I use three filters, but others can be added as appropriate to your situation. Of these, the most important is duration: I only want to collect information on queries that run long enough to be of importance to me, as we have already discussed.

Collecting the Data

Depending on the filters you used, the amount of time you run Profiler to collect the data, and how busy your server is, you may collect a lot of rows of data. While you have several choices, I suggest you configure Profiler to save the data to a file on your local computer (not on the server you are profiling) and not set a maximum file size. Instead, let the file grow as big as it needs to. You may want to watch the growth of this file in case it gets out of hand; in most cases, if you have used appropriate filters, the size should stay manageable. I recommend using one large file because it makes it easier to identify long running queries.

As mentioned before, collect your trace file during a typical production period, over a period of 3-4 hours or so. As the data is being collected, it will be sorted for you by duration, with the longest running queries appearing at the bottom of the Profiler window. It can be interesting to watch this window for a while as the data is collected. If you like, you can configure Profiler to turn itself off automatically at the appropriate time, or you can do this manually.

Once the time is up and the trace stopped, the Profiler trace is now stored in the memory of the local computer, and on disk. Now you are ready to identify those long running queries.

Analyzing the Data

Guess what: you have already identified all the queries that ran during the trace collection and exceeded your specified duration, whatever it was. So if you selected a duration of 5 seconds, you will only see those queries that took longer than five seconds to run. By definition, all the queries you have captured need to be tuned. "What! But over 500 queries were captured! That's a lot of work!" It is not as bad as you think. In most cases, many of the captured queries are duplicates; in other words, you have probably captured the same query over and over again in your trace, so those 500 captured queries may represent only 10, or 50, or even 100 distinct queries. On the other hand, if you are lucky, only a handful of queries may have been captured.

Whether you have just a handful or a lot of slow running queries, your next job is to determine which are the most critical to analyze and tune first. This is where you need to set priorities, as you probably don't have enough time to analyze them all.

To prioritize the long running queries, you will probably want to first focus on those that run the longest. But as you do this, keep in mind how often each query is run.

For example, if you know that a particular query belongs to a report that only runs once a month (and you happened to capture it while it was running), and this query took 60 seconds to run, it is probably not as high a priority to tune as a query that takes 10 seconds to run but runs 10 times a minute. In other words, you need to balance how long a query takes to run against how often it runs. With this in mind, identify and prioritize those queries that take the most physical SQL Server resources to run. Once you have done this, you are ready to analyze and tune them.
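One way to do this balancing is to load the saved trace file back into SQL Server and aggregate it. The query below is only a sketch (the file name is hypothetical, and grouping on the raw query text will only cluster exact duplicates); fn_trace_gettable reads a saved .trc file as a table:

-- Rank captured queries by total cost (executions x average duration).
-- The trace file path is hypothetical.
SELECT CAST(TextData AS nvarchar(4000)) AS Query,
       COUNT(*)      AS Executions,
       AVG(Duration) AS AvgDurationMs,
       SUM(Duration) AS TotalDurationMs
FROM ::fn_trace_gettable('C:\Traces\slow_queries.trc', DEFAULT)
WHERE Duration IS NOT NULL
GROUP BY CAST(TextData AS nvarchar(4000))
ORDER BY TotalDurationMs DESC

The queries at the top of this list, with high total duration driven by high frequency, are usually the ones worth tuning first.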

Analyze Queries by Viewing Their Execution Plans

To analyze the queries that you have captured and prioritized, you will need to move the code to Query Analyzer in order to view and analyze its execution plan. As the focus of this article is on auditing, not analysis, we won't spend time here showing you how to analyze specific queries.

How you move the code to Query Analyzer depends on what you captured. If it is plain Transact-SQL, you can cut and paste it directly into Query Analyzer for analysis. But if the code is within a stored procedure, you have to do a little more work, because Profiler does not show the code inside the stored procedure; it only shows the name of the stored procedure, along with any parameters that were passed to it. In this case, you must go to the stored procedure in question and cut and paste its code into Query Analyzer. Then, assuming parameters were passed, you will have to manually modify the code so that it runs with the parameter values Profiler captured.

Now the time-consuming chore begins, and that is the analysis of each query's execution plan to see if there is any way the query can be tuned for better performance. But because you have now identified and prioritized these problematic queries, your time will be much more efficiently spent.

SQL Server Query Performance Audit Checklist

SQL Server Job Checklist                                                   Your Response
Have you identified all long running queries?                              _____________
Have you prioritized the queries?                                          _____________
Have you reviewed the execution plans of the above prioritized queries?    _____________

Enter your results in the table above.

You can find the original article at this link:

Monday, October 09, 2006

Missing Icons and Bad Links or Invisible Links or Script Error in MSDN

I recently had a problem with MSDN Library files, where most of the links produced a script error and the top links, which point to more resources or examples, were found to be missing.
I tried re-installing, and the same problem occurred.
And then I found the solution to it in the following link.
Missing Icons and Bad Links in MSDN - MSDN Forums:

Or else you could check out the solution down below in the words of the solution provider, Bri.

I have run into this "missing link" problem as well in MSDN versions Oct 2001 and earlier. I recently worked with MS Developer Support to alleviate this issue. Over the course of 2 days we found the proverbial "needle in a haystack" fix. All you have to do is change the following registry setting (Put the following text in a .reg file and run it if you wish):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\ActiveX Compatibility\{ADB880A6-D8FF-11CF-9377-00AA003B7A11}]
"Compatibility Flags"=dword:00000000

It's always amazing to me how small -- and invisible -- a fix is to problems like these!

For a step-by-step explanation:
1. Open your registry editor: Start --> Run --> type in regedit
2. Go to the following key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\ActiveX Compatibility\{ADB880A6-D8FF-11CF-9377-00AA003B7A11}
3. Right-click on "Compatibility Flags" and select "Modify"
4. Type 0 (zero) in Value Data

Hope this was useful.

I don't know why this problem occurred, though.
It may be due to the fact that I had installed an IE update, or that I had installed and removed a beta version of Visual Studio .NET.

Please visit my other blogs too: for information and for internet marketing. Thanks !!

Friday, August 25, 2006

Efficient Method for Paging Through Large Result Sets in MS SQL Server 2000

It is amazing how many cycles, hardware and brainware alike, go into paging results efficiently. Recently Scott Mitchell authored an article titled Efficiently Paging Through Large Result Sets in SQL Server 2000 that looked at a stored procedure that returned a particular "page" of data from a table. After examining Scott's approach, I saw some potential improvements in his method. (If you haven't yet perused Scott's technique, take a moment to do so before continuing here, as this article builds upon his example.)

Scott's approach made use of a table variable to generate a synthetic ID to act as a row counter. Every time a page is requested, all of the data in the table being paged must be read and inserted into the table variable in order to generate the synthetic ID, at which point a SELECT statement returns just those records whose IDs fall within the desired range. While Scott's method is faster than blindly returning all of the records, his approach can be greatly improved by using ROWCOUNT to greatly reduce the number of records that must be read and inserted into the table variable.

In this article we'll look at two ways to improve Scott's method. The first approach uses a table variable (just like Scott's), but utilizes the SET ROWCOUNT command to reduce the number of records read and inserted into the table variable. The second technique more cleverly uses SET ROWCOUNT to provide an even more efficient approach than the first. Read on to learn more!

Using ROWCOUNT to Optimize Paging
The first step we can take to optimize paging is to use SET ROWCOUNT prior to filling our table variable. SET options alter the current session's handling of specific behavior; SET ROWCOUNT tells SQL Server to stop processing query results after it has processed the specified number of rows. For more background on SET ROWCOUNT, refer to Retrieving the First N Records from a SQL Query.
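As a quick illustration (using the Employees table assumed throughout the examples below), SET ROWCOUNT caps every subsequent statement in the session until it is reset:

SET ROWCOUNT 10
-- Returns at most the first 10 employees
SELECT EmployeeID FROM Employees ORDER BY EmployeeID

SET ROWCOUNT 0  -- always reset, or later statements stay capped

Forgetting the final SET ROWCOUNT 0 is a classic source of mysteriously truncated result sets later in the same connection.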

This particular stored procedure example was created by 4Guys reader Dave Griffiths.

CREATE PROCEDURE usp_PagedResults_New_Rowcount
(
  @startRowIndex int,
  @maximumRows int
)
AS

DECLARE @TempItems TABLE
(
  ID int IDENTITY,
  EmployeeID int
)

DECLARE @maxRow int

-- A check can be added to make sure @startRowIndex isn't > count(1)
-- from employees before doing any actual work unless it is guaranteed
-- the caller won't do that

SET @maxRow = (@startRowIndex + @maximumRows) - 1

-- Stop filling the table variable once the last row of the page is reached
SET ROWCOUNT @maxRow

INSERT INTO @TempItems (EmployeeID)
SELECT EmployeeID
FROM Employees
ORDER BY EmployeeID

SET ROWCOUNT @maximumRows

SELECT e.*, d.[Name] as DepartmentName
FROM @TempItems t
  INNER JOIN Employees e ON
    e.EmployeeID = t.EmployeeID
  INNER JOIN Departments d ON
    d.DepartmentID = e.DepartmentID
WHERE ID >= @startRowIndex

SET ROWCOUNT 0
GO


As you can see, this stored procedure uses SET ROWCOUNT prior to filling the table variable, capped at what we already know will be the last row needed to satisfy the current page. SQL Server stops filling the table once it processes this number of rows, minimizing the amount of data pulled into the table variable. This method rocks for the first several pages and only begins to tax SQL Server resources as we get incrementally deeper into our pages. Last, but very important, is SET ROWCOUNT 0: it turns off row limitation and returns the current session to its default behavior, in case the caller is doing anything else interesting with the same connection that may require more rows returned.

Taking Advantage of the SQL Server Optimizer
An optimizer trick that can also be used in this scenario: when a single variable is assigned from a query that returns a list of values, it gets assigned the value of the last item in the list. For example, the following SQL script creates and fills a table variable with 100 records (1 through 100), then selects the value of the val column into a local variable from the entire table using two different sorts:

DECLARE @tmp TABLE
(
  val int
)

DECLARE @i int, @cnt int, @res int
SELECT @i = 0, @cnt = 100
WHILE @i < @cnt
BEGIN
  SELECT @i = @i + 1
  INSERT INTO @tmp (val) VALUES (@i)
END
SELECT @res = val FROM @tmp ORDER BY val ASC
SELECT @res [Value], 'ASC Sort' [Sort]
SELECT @res = val FROM @tmp ORDER BY val DESC
SELECT @res [Value], 'DESC Sort' [Sort]

The results from these selects follow:

Value Sort
----- ----
100 ASC Sort

Value Sort
----- ----
1 DESC Sort

While one might think that SQL Server will need to read every record from the @tmp table and assign the val column to @res for each such record, looking at the query plan for the statement it becomes obvious that the SQL Server optimizer knows that it will only ever need a single row to complete the query and is able to just read in that particular record. Examining this operation in SQL Profiler you'll find that the optimizer is able to get to the end result in only six reads (as opposed to the 100+ reads that would be necessary if it was reading every single record in the table). In short, when presented with such a query SQL Server doesn't actually get all of the records from the table, one at a time, and assign them to the local variable. Rather, it searches just for the last record in the query and assigns that result to the variable.

So, how does this little trick help in the problem of paging large result sets? If this knowledge is combined with SET ROWCOUNT, large result sets can be efficiently paged without the need for temporary tables or table variables! Here is another, more efficient version of Scott and David's stored procedures:

CREATE PROCEDURE [dbo].[usp_PageResults_NAI]
(
  @startRowIndex int,
  @maximumRows int
)
AS

DECLARE @first_id int, @startRow int

-- A check can be added to make sure @startRowIndex isn't > count(1)
-- from employees before doing any actual work unless it is guaranteed
-- the caller won't do that

-- Get the first employeeID for our page of records
SET ROWCOUNT @startRowIndex
SELECT @first_id = employeeID FROM employees ORDER BY employeeID

-- Now, set the row count to MaximumRows and get
-- all records >= @first_id
SET ROWCOUNT @maximumRows

SELECT e.*, d.[Name] as DepartmentName
FROM employees e
  INNER JOIN Departments d ON
    e.DepartmentID = d.DepartmentID
WHERE employeeid >= @first_id
ORDER BY e.EmployeeID

SET ROWCOUNT 0
GO


Using optimizer knowledge and SET ROWCOUNT, the first EmployeeID of the requested page is stored in a local variable as a starting point. Next, SET ROWCOUNT limits the result to the @maximumRows records requested. This pages the result set in a much more efficient manner, and it also takes advantage of pre-existing indexes on the table, since it goes directly to the base table rather than to a locally created table.
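Calling the procedure is then just a matter of passing the first row of the desired page and the page size; for example, for the third page of ten rows:

-- Page 3 with 10 rows per page: rows 21 through 30
EXEC dbo.usp_PageResults_NAI @startRowIndex = 21, @maximumRows = 10

The caller computes @startRowIndex as ((page - 1) * pageSize) + 1.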

Using an even more highly unscientific comparison method than Scott's, let's see the results:

(Comparison of CPU, reads, and duration per page against a 50,000-row table: Scott's approach (table variable), David's approach (table variable with SET ROWCOUNT), and Greg's approach (SET ROWCOUNT alone).)

Impressed? If your application is like 99.99% of applications that use paging and you know no one will ever make it to the 1000th page, you might not think this optimizer knowledge is very important. But talk to just about anyone who runs a heavily loaded database, and they will tell you that space and blocking in tempdb are a problem. (Any time you write to a temporary table or table variable, you're working with tempdb.) Anything that can be done to minimize the use of tempdb should be used; it will speed up your application. Also, the roughly 30% reduction in reads over using SET ROWCOUNT alone, even for the first page, is a significant IO saving.

As you can see, paging can be greatly improved with the use of SET ROWCOUNT and perhaps a little knowledge of the optimizer. SET ROWCOUNT can be used with a table variable while still allowing different sorting parameters easily. Additionally, we could allow ordering of the pages with different sorting options through the use of dynamic SQL (let us know if you'd like to see how to do these things in another article) and still use the optimizer to our advantage, but this can get very complex in the case of ties.

Happy Programming!

Read the original article here..
Thank you 4GuysFromRolla... You guys rock...


Monday, July 31, 2006

ASP Tips to Improve Performance and Style

Performance is a feature. You need to design for performance up front, or you get to rewrite your application later on. That said, what are some good strategies for maximizing the performance of your Active Server Pages (ASP) application?

This article presents tips for optimizing ASP applications and Visual Basic® Scripting Edition (VBScript). Many traps and pitfalls are discussed. The suggestions listed in this article have been tested on and other sites, and work very well. This article assumes that you have a basic understanding of ASP development, including VBScript and/or JScript, ASP Applications, ASP Sessions, and the other ASP intrinsic objects (Request, Response, and Server).

Often, ASP performance depends on much more than the ASP code itself. Rather than cover all wisdom in one article, we list performance-related resources at the end. These links cover both ASP and non-ASP topics, including ActiveX® Data Objects (ADO), Component Object Model (COM), databases, and Internet Information Server (IIS) configuration. These are some of our favorite links; be sure to give them a look.

  1. Cache Frequently-Used Data on the Web Server
  2. Cache Frequently-Used Data in the Application or Session Objects
  3. Cache Data and HTML on the Web Server's Disks
  4. Avoid Caching Non-Agile Components in the Application or Session Objects
  5. Do Not Cache Database Connections in the Application or Session Objects and Using the Session Object Wisely
  6. Encapsulate Code in COM Objects & Acquire Resources Late, Release Early
  7. Out-of-Process Execution Trades off Performance for Reliability
  8. Option Explicit, Local Variables and Script Variables
  9. Avoid Redimensioning Arrays
  10. Use Response Buffering
  11. Batch Inline Script and Response.Write Statements
  12. Check Connection, Using the OBJECT Tag, TypeLib Declarations
  13. Take Advantage of Your Browser's Validation Abilities & Enable Browser and Proxy Caching
  14. Avoid String Concatenation in Loops
  15. More on Fine Tuning

Len Cardinal
Senior Consultant, Microsoft Consulting Services
George V. Reilly
Microsoft IIS Performance Lead

Adapted from an article by Nancy Cluts
Developer Technology Engineer
Microsoft Corporation

Original Resource:

ASP Tips: Tip 15: More on Fine Tuning

Use Server.Transfer Instead of Response.Redirect Whenever Possible

Response.Redirect tells the browser to request a different page. This function is often used to redirect the user to a log on or error page. Since a redirect forces a new page request, the result is that the browser has to make two round trips to the Web server, and the Web server has to handle an extra request. IIS 5.0 introduces a new function, Server.Transfer, which transfers execution to a different ASP page on the same server. This avoids the extra browser-to-Web-server round trip, resulting in better overall system performance, as well as better response time for the user. Check out New Directions in Redirection, which talks about Server.Transfer and Server.Execute.

Also see Leveraging ASP in IIS 5.0 for a full list of the new features in IIS 5.0 and ASP 3.0.

Use Trailing Slashes in Directory URLs

A related tip is to make sure to use a trailing slash (/) in URLs that point to directories. If you omit the trailing slash, the browser will make a request to the server, only to be told that it's asking for a directory. The browser will then make a second request with the slash appended to the URL, and only then will the server respond with the default document for that directory, or a directory listing if there is no default document and directory browsing has been enabled. Appending the slash cuts out the first, futile round trip. For user-friendliness, you may want to omit the trailing slash in display names.

For example, write:

<a href=""
title="MSDN Web Workshop"></a>

This also applies to URLs pointing to the home page on a Web site: Use the following:
<a href="">, not <a href="">.

Avoid Using Server Variables

Accessing server variables causes your Web site to make a special request to the server and collect all server variables, not just the one you requested. This is akin to needing to retrieve a specific item from a folder in that musty attic of yours: when you want that one item, you have to go to the attic and get the whole folder first. The same thing happens when you request a server variable; the performance hit occurs the first time you request one. Subsequent requests for other server variables do not cause performance hits.

Never access the Request object unqualified (for example, Request("Data")). For items not in Request.Cookies, Request.Form, Request.QueryString, or Request.ClientCertificate, there is an implicit call to Request.ServerVariables. The Request.ServerVariables collection is much slower than the other collections.
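In practice this means always naming the collection you want. The snippet below sketches the difference (the variable and form-field names are illustrative):

' Slow: unqualified access searches the collections in order and can
' fall through to the expensive Request.ServerVariables collection
userName = Request("UserName")

' Fast: name the collection explicitly
userName = Request.Form("UserName")

' If you truly need a server variable, fetch it once and reuse it
Dim serverName
serverName = Request.ServerVariables("SERVER_NAME")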

Upgrade to the Latest and Greatest

System components are constantly updated and we recommend that you upgrade to the latest and greatest. Best of all would be to upgrade to Windows 2000 (and hence, IIS 5.0, ADO 2.5, MSXML 2.5, Internet Explorer 5.0, VBScript 5.1, and JScript 5.1). IIS 5.0 and ADO 2.5 implement spectacular performance gains on multiprocessor machines. Under Windows 2000, ASP scales nicely to four processors or more, whereas under IIS 4.0, ASP didn't scale well beyond two processors. The more script code and ADO usage in your application, the more performance benefits you'll see after upgrading to Windows 2000.

If you can't upgrade to Windows 2000 just yet, you can upgrade to the latest releases of SQL Server, ADO, VBScript and JScript, MSXML, Internet Explorer, and Windows NT 4 Service Packs. All of them offer improved performance as well as increased reliability.

Tune Your Web Server

There are several IIS tuning parameters that can improve site performance. For example, with IIS 4.0, we've often found that increasing the ASP ProcessorThreadMax parameter (see IIS documentation) can have significant benefits, especially on sites that tend to wait on back-end resources such as databases or other middle-ware products such as screen-scrapers. In IIS 5.0, you may find that turning on ASP Thread Gating is more effective than trying to find an optimal setting for AspProcessorThreadMax, as it is now known.

For good references, see Tuning IIS below.

The optimal configuration settings are going to be determined by (among other factors) your application code, the hardware it runs on, and the client workload. The only way to discover the optimal settings is to run performance tests, which brings us to the next tip.

Do Performance Testing

As we said before, performance is a feature. If you are trying to improve performance on a site, set a performance goal, then make incremental improvements until you reach your goal. Don't save all performance testing for the end of the project. Often, at the end of a project, it's too late to make necessary architectural changes, and you disappoint your customer. Make performance testing a part of your daily testing. Performance testing can be done against individual components, such as ASP pages or COM objects, or on the site as a whole.

Many people test the performance of their Web sites by using a single browser to request pages. This will give you a good feel for the responsiveness of the site, but it will tell you nothing about how well the site performs under load.

Generally, to accurately measure performance, you need a dedicated testing environment. This environment should include hardware that somewhat resembles production hardware in terms of processor speed, number of processors, memory, disk, network configuration, and so on. Next, you need to define your client workload: how many simultaneous users, the frequency of requests they will be making, the types of pages they'll be hitting, and so forth. If you don't have access to realistic usage data from your site, you'll need to guesstimate. Finally, you need a tool that can simulate your anticipated client workloads. Armed with these tools, you can start to answer questions such as "How many servers will I need if I have N simultaneous users?" You can also sniff out bottlenecks and target these for optimization.

Some good Web stress-testing tools are listed below. We highly recommend the Microsoft Web Application Stress (WAS) Toolkit. WAS allows you to record test scripts and then simulate hundreds or thousands of users hitting your Web servers. WAS reports numerous statistics, including requests per second, response time distributions, and error counts. WAS has both a rich-client and a Web-based interface; the Web interface allows you to run tests remotely.

ASP Tips: Tip 14: Avoid String Concatenation in Loops

Many people build a string in a loop like this:

s = "<table>" & vbCrLf
For Each fld in rs.Fields
    s = s & "  <th>" & fld.Name & "</th> "
Next

While Not rs.EOF
    s = s & vbCrLf & "  <tr>"
    For Each fld in rs.Fields
        s = s & "    <td>" & fld.Value & "</td> "
    Next
    s = s & "  </tr>"
    rs.MoveNext
Wend

s = s & vbCrLf & "</table>" & vbCrLf
Response.Write s

There are several problems with this approach. The first is that repeatedly concatenating a string takes quadratic time; less formally, the time that it takes to run this loop is proportional to the square of the number of records times the number of fields. A simpler example should make this clearer.

s = ""
For i = Asc("A") to Asc("Z")
    s = s & Chr(i)
Next

On the first iteration, you get a one-character string, "A". On the second iteration, VBScript has to reallocate the string and copy two characters ("AB") into s. On the third iteration, it has to reallocate s again and copy three characters into s. On the Nth (26th) iteration, it has to reallocate and copy N characters into s. That's a total of 1+2+3+...+N which is N*(N+1)/2 copies.

In the recordset example above, if there were 100 records and 5 fields, the inner loop would be executed 100*5 = 500 times and the time taken to do all the copying and reallocation would be proportional to 500*500 = 250,000. That's a lot of copying for a modest-sized recordset.

In this example, the code could be improved by replacing the string concatenation with Response.Write() or inline script (<% = fld.Value %>). If response buffering is turned on (as it should be), this will be fast, as Response.Write just appends the data to the end of the response buffer. No reallocation is involved and it's very efficient.

In the particular case of transforming an ADO recordset into an HTML table, consider using GetRows or GetString.
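For example, GetString can emit the entire recordset as delimited text in a single call. A sketch, assuming rs is an open ADO recordset (2 is adClipString, -1 means all rows):

' Build all table cells in one call instead of concatenating in a loop
Dim rows
rows = rs.GetString(2, -1, "</td><td>", "</td></tr>" & vbCrLf & "<tr><td>", "&nbsp;")
Response.Write "<table><tr><td>" & rows & "</td></tr></table>"
' Note: GetString appends the row delimiter after the last record too,
' so the table ends with one empty cell; trim the string if that matters.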

If you concatenate strings in JScript, it is highly recommended that you use the += operator; that is, use s += "some string", not s = s + "some string".

ASP Tips: Tip 13: Use Your Browser

Take Advantage of Your Browser's Validation Abilities

Modern browsers have advanced support for features such as XML, DHTML, Java applets, and the Remote Data Service. Take advantage of these features whenever you can. All of these technologies can save round trips to the Web server by performing client-side validation as well as data caching. If you are running a smart browser, the browser is capable of doing some validation for you (for example, checking that a credit card has a valid checksum before executing POST). Again, take advantage of this whenever you can. By cutting down on client-server round trips, you'll reduce the stress on the Web server and cut down network traffic (though the initial page sent to the browser is likely to be larger), as well as any back-end resources that the server accesses. Furthermore, the user will not have to fetch new pages as often, improving the experience. This does not relieve you of the need to do server-side validation—you should always do server-side validation as well. This protects against bad data coming from the client for some reason, such as hacking, or browsers that don't run your client-side validation routines.

Much has been made of creating "browser-independent" HTML. This concern often discourages the developer from taking advantage of popular browser features that could benefit performance. For truly high-performance sites that must be concerned about browser "reach," a good strategy is to optimize pages for the popular browsers. Browser features can be easily detected in ASP using the Browser Capabilities Component. Tools such as Microsoft FrontPage can help you design code that works with the browsers and HTML versions you wish to target.

Enable Browser and Proxy Caching

By default, ASP disables caching in browsers and proxies. This makes sense since by nature an ASP page is dynamic with potentially time-sensitive information. If you have a page that doesn't require a refresh on every view, you should enable browser and proxy caching. This allows browsers and proxies to use a "cached" copy of a page for a certain length of time, which you can control. Caching can greatly alleviate load on your server and improve the user experience.

What kind of dynamic pages might be candidates for caching? Some examples are:

  • A weather page, where the weather is only updated every 5 minutes.
  • A home page listing news items or press releases, which are updated twice a day.
  • A mutual fund performance listing, where underlying statistics are only updated every few hours.

Note that with browser or proxy caching, you'll get fewer hits recorded on your Web server. If you are trying to accurately measure all page views, or post advertising, you may not be happy with browser and proxy caching.

Browser caching is controlled by the HTTP "Expires" header, which is sent by a Web server to a browser. ASP provides two simple mechanisms to send this header. To set the page to expire at a certain number of minutes in the future, set the Response.Expires property. The following example tells the browser that the content expires in 10 minutes:

<% Response.Expires = 10 %>

Setting Response.Expires to a negative number or 0 disables caching. Be sure to use a large negative number, such as -1000 (roughly 17 hours, since the value is in minutes), to work around mismatches between the clocks on the server and the browsers. A second property, Response.ExpiresAbsolute, allows you to set the specific time at which the content will expire:

<% Response.ExpiresAbsolute = #May 31,2001 13:30:15# %>

Rather than using the Response object to set expiration, you can write a <meta> tag into the HTML, usually within the <HEAD> section of the HTML file. Some browsers will respect this directive, although proxies will not.

<meta http-equiv="Expires" content="May 31,2001 13:30:15">

Finally, you can indicate whether the content is valid for an HTTP proxy to cache, using the Response.CacheControl property. Setting this property to "Public" enables proxies to cache the content.

<% Response.CacheControl = "Public" %>

By default, this property is set to "Private." Note that you should not enable proxy caching for pages that show data specific to a user, as the proxy may serve one user's pages to other users.