Born in 780 AD was a man by the name of Abu Abdullah Muhammad Ibn Musa al-Khwarizmi. Al-Khwarizmi commenced the study of algorithms by introducing algebra to solve problems. Since then, algorithmics has been the study of generating a list of instructions that, when followed, accomplishes a particular task.
From phase 1, we should have an indication of which tasks are to be performed by the client, the database server and, if need be, the application server. In this phase, however, all tasks are investigated to make sure every element in the system is productive and concise.
Firstly, the development team will now decide which tier performs which task. The aim is to balance the processing done on each tier so that no tier is overwhelmed. Each tier's tasks will be organized into a list, and then each task will be analyzed separately to see whether it would be better handled in another tier. Let's demonstrate the idea by focusing on the two tier model.
The focus is on the two tier model rather than the three tier model mainly because the three tier model relieves the other two tiers of their workload by handling their tasks. What is investigated here is how to handle these tasks in such a way that the final result is fast and not costly on the system's resources.
With the two tier model the algorithms are of more importance, because more work is processed in both tiers. We don't want to overwhelm the server or the client with buggy code. Let's investigate the two tier model and see how algorithmics applies.
Client Side Algorithmics
Studying algorithms on the client side is important because we don't want the user to wait unnecessarily. Client-side scripting can be used to improve the website's processing speed, but the ideology behind the code must be viewed fairly critically.
Sure, the code may work fine and the user may get the correct information back. But you may have unintentionally wasted the user's time or, worse still, crashed the processing function because of a stack overflow (a term for exceeding the limited memory available in the stack).
Client scripting is used mostly to retrieve data from the user's screen or browser.
Building on the programming-language knowledge founded in the previous stage, there are special language features that may be used to save recursive calls between the functions created to perform special tasks. Write down what each page is meant to do and design classes to handle these processes.
When dealing with a recursive function, consider the memory stored in the stack. For those unsure of stack memory storage, the stack is the memory space available for storing temporary variables. The stack works in such a way that variables are placed on, and retrieved from, the top of the stack. The number of times a function calls itself is called the function's recursive depth. The lower the recursive depth, the better the algorithm.
A deep recursion will tie up the client's processing capabilities and possibly overwhelm the client. In most cases an iterative solution may be the key to solving the problem, rather than blowing up the stack with temporary variables. This results in less weight on the client side.
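As a quick sketch of the idea (the function names here are illustrative, not from the article), compare a recursive and an iterative version of the same task in client-side JavaScript:

```javascript
// Recursive sum: each call pushes a new frame onto the stack,
// so the recursive depth grows linearly with n.
function sumRecursive(n) {
  if (n <= 0) return 0;           // base case stops the recursion
  return n + sumRecursive(n - 1); // one stack frame per call
}

// Iterative sum: a single loop, constant stack usage.
function sumIterative(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}
```

Both return the same answer, but for a large enough n the recursive version will exhaust the call stack in most JavaScript engines, while the iterative one keeps running in constant stack space.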
There are definite advantages in developing algorithms to analyze your client side scripting. A nice example is checking the user's input to make sure that the information being passed into an SQL query or stored procedure is valid. This may protect the database from possible malicious intent.
The algorithms generated for each of these examples may include the use of regular expressions. In an article, Mitchell Harper, the founder of DevArticles.com, explained how to use regular expressions in PHP and their importance in string manipulation and validation.
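A minimal sketch of this kind of validation on the client side, in JavaScript (the function name and pattern are invented for illustration):

```javascript
// Whitelist check: accept only the characters we expect,
// rejecting anything that could carry SQL metacharacters.
function isValidUsername(input) {
  // 3 to 20 letters, digits, or underscores; the pattern is illustrative
  return /^[A-Za-z0-9_]{3,20}$/.test(input);
}
```

Note that client-side checks alone are not sufficient protection, since they can be bypassed; they should be backed by the same validation on the server side.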
Server Side Algorithmics
Once the algorithms are analyzed it is time for the next phase. This stage analyzes data and how it is structured. The web development team will see how to extract and manipulate data from the database. The logic of the queries must be considered.
For example, the team may design the database in a way so that when a particular data column in a table is altered, a trigger is called to manipulate other data fields. In this case, the time it takes to handle this request is shortened and the correctness is second to none.
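To make the idea concrete, here is a minimal trigger sketch in SQLite-style SQL; the table and column names are invented for illustration:

```sql
-- Hypothetical example: keep a running order total up to date
-- whenever a line item's price changes.
CREATE TRIGGER update_order_total
AFTER UPDATE OF price ON order_items
FOR EACH ROW
BEGIN
  UPDATE orders
  SET total = total - OLD.price + NEW.price
  WHERE orders.id = NEW.order_id;
END;
```

Because the database fires the trigger itself, the dependent field is updated in the same operation as the change, with no extra round trip from the application.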
One other factor in studying the database is maintainability: it must be designed in such a way that data can be found rapidly and altered simply.
All of these problems must be discussed and a database plan should be produced for implementation and testing. Another factor that occurs at this stage in the cycle is the use of server side scripting.
PHP is one of the most common server side scripting languages. Back in 1994, Rasmus Lerdorf put together a bunch of Perl scripts to see who was checking his resume. These scripts were then packaged and released under the name Personal Home Page, which gives us the acronym PHP. PHP is open source and scripts are continuously added to it.
So, the issues relating to programming on the server side are an analysis of the optimization of the coding procedures and a quick analysis of the data structuring. This analysis will be carried onto the next phase in the cycle, where the database is designed and tested.
Documentation is very important in this phase. When we solve a problem using algorithmic procedures, we don't want to go back in the next cycle and find the problem recurring. That would be like trying to re-invent the wheel.