How to Perform a SQL Server Performance Audit - SQL Server Configuration Performance Checklist: Part 2 (Page 6 of 9)
Index Create Memory (KB)
The "index create memory" setting determines how much memory SQL Server can use for the sorts performed when creating an index. The default value of "0" tells SQL Server to determine the ideal value automatically. In almost all cases, SQL Server will configure the amount of memory optimally.
But in some unusual cases, especially with very large tables, it is possible for SQL Server to make a mistake, causing large indexes to be created very slowly, or not at all. If you run into this situation, you may want to consider setting the "index create memory" value yourself, although you will have to find the optimum setting for your situation through trial and error. Legal settings for this option run from 704 to 2147483647. This number refers to the amount of RAM, in KB, that SQL Server can devote to creating the index.
Keep in mind that if you do change the setting, this memory will then be allocated for index creation and will not be available for other uses. If your server has more than enough RAM, this will be no problem. But if your server is short on RAM, changing this setting could negatively affect the performance of other aspects of SQL Server. You might consider making this change only when you are creating or rebuilding large indexes, and returning the setting to the default at all other times.
As with the other settings, if you find in your audit that this setting is some value other than the default, try to find out why. If you can't find out why, or if there is not a good reason, change it back to the default value.
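If you do need to override the default, the change can be scripted with sp_configure. Here is a sketch, not a prescription: the table and index names are hypothetical, 524288 KB (512 MB) is only an example value, and since "index create memory" is an advanced option, "show advanced options" must be enabled first.

```sql
-- Make advanced options such as 'index create memory' visible
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO

-- Temporarily allow up to 512 MB (524288 KB) for index-creation sorts
EXEC sp_configure 'index create memory', 524288
RECONFIGURE
GO

-- Build the large index (hypothetical table and index names)
CREATE INDEX IX_BigTable_Col1 ON dbo.BigTable (Col1)
GO

-- Return the setting to the default so SQL Server manages it dynamically
EXEC sp_configure 'index create memory', 0
RECONFIGURE
GO
```

This follows the advice above: raise the value only for the duration of a large index build, then restore the default.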
Lightweight Pooling

SQL Server 7.0 and 2000, by default, run in what is called "thread mode." What this means is that SQL Server uses what are called UMS (User Mode Schedulers) threads to run user processes. SQL Server will create one UMS thread per processor, with each one taking turns running the many user processes found on a busy SQL Server. For optimum efficiency, the UMS attempts to balance the number of user processes run by each thread, which in effect tries to evenly balance all of the user processes over all the CPUs in the server.
SQL Server also has an optional mode it can run in, called fiber mode. In this case, SQL Server uses one thread per processor (like thread mode), but the difference is that multiple fibers are run within each thread. Fibers are used to assume the identity of the thread they are executing and are non-preemptive to other SQL Server threads running on the server.
Think of a fiber as a "lightweight thread," which, under certain circumstances, takes less overhead than standard UMS threads to manage. Fiber mode is turned on and off using the "lightweight pooling" SQL Server configuration option. The default value is "0", which means that fiber mode is turned off.
So what does all this mean? Like everything, there are pros and cons to running in one mode over another. Generally speaking, fiber mode is only beneficial when all of the following conditions exist:
Two or more CPUs are found on the server (the more CPUs, the larger the benefit).
All of the CPUs are running near maximum capacity (95-100%) most of the time.
There is a lot of context switching occurring on the server (as reported by the Performance Monitor counter System: Context Switches/sec). Generally speaking, more than 5,000 context switches per second is considered high.
The server is making little or no use of distributed queries or extended stored procedures.
If all of the above are true, then turning on the "lightweight pooling" option in SQL Server may yield a 5% or greater boost in performance.
But if all four of these conditions are not true, then turning on "lightweight pooling" could actually degrade performance. For example, if your server makes use of many distributed queries or extended stored procedures, then turning on "lightweight pooling" will definitely cause a problem, because these features cannot make use of fibers. SQL Server would have to switch back and forth between fiber mode and thread mode as needed, which hurts performance.
As with the other settings, if you find in your audit that this setting is some value other than the default, try to find out why. In addition, check to see if the four conditions above exist. If they do, then turning "lightweight pooling" on may be beneficial. If these four conditions do not exist, then use the default value of "0".
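Checking and changing the option can be done with sp_configure; note that a change to "lightweight pooling" only takes effect after the mssqlserver service is restarted. A sketch, assuming the four conditions above have been verified first:

```sql
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO

-- Display the current configured and running values
EXEC sp_configure 'lightweight pooling'
GO

-- Turn fiber mode on (requires a restart of the mssqlserver service)
EXEC sp_configure 'lightweight pooling', 1
RECONFIGURE
GO
```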
Locks

Each time SQL Server locks a record, the lock must be stored in memory. By default, the value for the "locks" option is "0", which means that lock memory is dynamically managed by SQL Server. Internally, SQL Server can reserve from 2% to 40% of available memory for locks. In addition, if SQL Server determines that allocating additional memory for locking could cause paging at the operating system level, it will not allocate the memory to locks, instead giving it up to the operating system in order to prevent paging.
In almost all cases, you should allow SQL Server to dynamically manage locks, leaving the default value as is. If you enter your own value for lock memory (legal values are from 5000 to 2147483647 KB), then SQL Server cannot dynamically manage this portion of memory, which could cause some other areas of SQL Server to experience poor performance.
If you get an error message that says you have exceeded the maximum number of locks available, you have these options:
Closely examine your queries to see if they are causing excessive locking. If they are, it is possible that performance is also being hurt because of a lack of concurrency in your application. It is better to fix bad queries than it is to allocate too much memory to tracking locks.
Reduce the number of applications running on the server.
Add more RAM to your server.
Boost the number of locks to a high value (based on trial and error). This is the least desirable option as giving memory to locks prevents it from being used by SQL Server for more beneficial purposes.
Do your best to resist using this last option. If you find in your audit that this setting is some value other than the default, find out why. If you can't find out why, or if the reason is poor, change it back to the default value.
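Before touching the "locks" setting, it helps to see how many locks the server is actually holding and to confirm that lock memory is still dynamically managed. A sketch for SQL Server 2000:

```sql
-- Count the locks currently held across the server
SELECT COUNT(*) AS locks_held
FROM master.dbo.syslockinfo
GO

-- Confirm 'locks' is still at its default of 0 (dynamic management)
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'locks'
GO
```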
Max Degree of Parallelism
This option allows you to specify if parallelism is turned on, turned off, or only turned on for some CPUs, but not for all CPUs in your server. Parallelism refers to the ability of the Query Optimizer to use more than a single CPU to execute a query. By default, parallelism is turned on and can use as many CPUs as there are in the server (unless this has been reduced due to the affinity mask option). If your server has only one CPU, the "max degree of parallelism" value is ignored.
The default for this option is "0", which means that parallelism is turned on for all available CPUs. If you change this setting to "1", then parallelism is turned off for all CPUs. You can also use this option to specify how many CPUs can be used for parallelism. For example, if your server has 8 CPUs and you only want parallelism to run on 4 of them, you can specify a value of 4 for this option. Although this intermediate option is available, it is doubtful that using it would really provide any performance benefit.
If parallelism is turned on, as it is by default if you have multiple CPUs, then the query optimizer will evaluate each query for the possibility of using parallelism, which takes a little overhead. On many OLTP servers, the nature of the queries being run often doesn't lend itself to using parallelism for running queries.
Examples of this include standard SELECT, INSERT, UPDATE and DELETE statements. Because of this, the query optimizer wastes its time evaluating each query to see if it can take advantage of parallelism. If you know that your queries will probably never benefit from parallelism, you can save a little overhead by turning this feature off, so queries aren't evaluated for it.
Of course, if the nature of the queries that are run on your SQL Server can take advantage of parallelism, you will not want to turn parallelism off. For example, if your OLTP server runs many correlated subqueries, or other complex queries, then you will probably want to leave parallelism on. You will want to test this setting to see if making this particular change will help, or hurt, your SQL Server's performance in your unique operating environment.
In most cases, because most servers run both OLTP and OLAP queries, parallelism should be kept on. As part of your performance audit, if you find parallelism turned off, or restricted, find out why. You will also want to determine if the server is virtually all OLTP-oriented. If so, then turning off parallelism might be justified, although you will want to thoroughly test this to see if it helps or hurts overall SQL Server performance. But if the server runs mixed OLTP and OLAP, or mostly OLAP queries, then parallelism should be on for best overall performance.
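If you do decide to restrict parallelism, sp_configure changes it server-wide, while the MAXDOP query hint restricts a single query and leaves the server-wide setting alone. A sketch (the table and column names are hypothetical):

```sql
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO

-- Server-wide: allow parallel plans to use at most 4 CPUs
EXEC sp_configure 'max degree of parallelism', 4
RECONFIGURE
GO

-- Per-query alternative: run this one query serially
SELECT CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders
GROUP BY CustomerID
OPTION (MAXDOP 1)
GO
```

The per-query hint is often the safer experiment, since it affects nothing but the statement it is attached to.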
Max Server Memory (MB) & Min Server Memory (MB)
For best SQL Server performance, you want to dedicate your SQL Servers to only running SQL Server, not other applications. And in most cases, the settings for the "maximum server memory" and the "minimum server memory" should be left at their default values. This is because the default values allow SQL Server to dynamically allocate memory in the server for the best overall performance. If you "hard code" a minimum or maximum memory setting, you risk hurting SQL Server's performance.
On the other hand, if SQL Server cannot be dedicated to its own physical server (other applications run on the same physical server along with SQL Server) you might want to consider changing either the minimum or maximum memory values, although this is generally not required.
Let's take a closer look at each of these two settings.
The "maximum server memory" setting, when set to the default value of 2147483647 (in MB), tells SQL Server to manage the use of memory dynamically, and if it needs it, to use as much RAM as is available (while leaving some memory for the operating system).
If you want SQL Server to not use all of the available RAM in the server, you can manually set the maximum amount of memory SQL Server can use by specifying a specific number that is between 4 (the lowest number you can enter) to the maximum amount of RAM in your server (but don't allocate all the RAM in your server, as the operating system needs some RAM too).
When "maximum server memory" is set to the default value, as mentioned before, memory use is adjusted dynamically. This also means that if you are running applications other than SQL Server on the physical server, SQL Server will "play nice" and give up some of its memory if the other applications need it.
So in most cases, there is no reason to change this setting from its default value. Only in rare occasions when SQL Server doesn't appear to "play nice," or when you want to artificially keep SQL Server from using all of the RAM available to it, would you want to change the default value. For example, if your "other" application(s) are more important than SQL Server's performance, then you can restrain SQL Server's performance if you want.
There are also two potential performance issues you can create if you do attempt to set the "maximum server memory" setting manually. First, if you allocate too much memory to SQL Server, and not enough for the operating system, the operating system may have no choice but to begin excessive paging, which will slow the performance of your server. Also, if you are using the Full-Text Search service, you must leave plenty of memory for its use. Its memory is not dynamically allocated like the rest of SQL Server's memory, and there must be enough available memory for it to run properly.
The "min server memory" setting, when set to the default value of 0 (in MB), tells SQL Server to manage the use of memory dynamically. This means that SQL Server will start allocating memory as is needed, and the minimum amount of RAM used can vary as SQL Server's needs vary.
If you change the "min server memory" setting to a value other than the default of 0, this does not mean that SQL Server will immediately begin using that amount of memory, as many people assume. Instead, once SQL Server's memory use grows to the specified minimum (because the memory is needed), its allocation will never drop back below that minimum.
For example, if you specify a minimum value of 100 MB and then restart SQL Server, SQL Server will not immediately reserve 100 MB of RAM for its minimal use. Instead, SQL Server will only take as much as it needs. If it never needs 100 MB, that amount will never be reserved. But if SQL Server's memory use does exceed 100 MB, then even if it later needs less, 100 MB becomes the bottom limit of how much memory SQL Server allocates. Because of this behavior, there is little reason to change the "min server memory" setting to any value other than its default.
If your SQL Server is dedicated, there is no reason to use the "min server memory" setting at all. If you are running other applications on the same server as SQL Server, there might be a very small benefit of changing this setting to a minimum figure, but it would be hard to determine what this value should be, and the overall performance benefit would be negligible.
If you find in your audit that these settings are some value other than the default, find out why. If you can't find out why, or if the reason is poor, change them back to their default values.
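Should a non-dedicated server genuinely need fixed limits, both settings can be changed with sp_configure. A sketch with purely illustrative values (a 1024 MB cap and a 256 MB floor); most servers should keep the defaults:

```sql
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO

-- Cap SQL Server's memory use at 1024 MB (example value only)
EXEC sp_configure 'max server memory', 1024
-- Set a 256 MB floor (only enforced once usage first reaches it)
EXEC sp_configure 'min server memory', 256
RECONFIGURE
GO
```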
Max Text Repl Size
The "max text repl size" setting is used to specify the maximum size of text or image data that can be inserted into a replicated column in a single physical INSERT, UPDATE, WRITETEXT, or UPDATETEXT transaction. If you don't use replication, or if you don't replicate text or image data, then this setting should not be changed.
The default value is 65536, the minimum value is 0, and the maximum value is 2147483647 (in bytes). If you do heavy replication of text or image data, you might want to consider increasing this value only if the size of this data exceeds 64K. But as with most of these settings, you will have to experiment with various values to see what works best for your particular circumstances.
As part of your audit, if you don't use replication, then the only correct value here is the default value. If the default value has been changed, you need to investigate whether text or image data is being replicated. If not, or if the data is less than 64K, then change it back to the default value.
Max Worker Threads
The "max worker threads" SQL Server configuration setting is used to determine how many worker threads are made available to the sqlservr.exe process from the operating system. The default value is 255 worker threads for this setting. SQL Server itself uses some threads, but they will be ignored for this discussion. The focus here is on threads created for the benefit of users.
If there are more than 255 user connections, then SQL Server will use thread pooling, where more than one user connection shares a single worker thread. Although thread pooling reduces the amount of system resources used by SQL Server, it can also increase contention among the user connections for access to SQL Server, hurting performance.
To find out how many worker threads your SQL Server is using, check the number of connections currently made to your server using Enterprise Manager. For each SQL Server connection, one worker thread is created, up to the total number of worker threads specified in the "max worker threads" setting. For example, if there are 100 connections, then 100 worker threads would be employed. But if there are 500 connections and only 255 worker threads are available, then only 255 worker threads are used, with connections sharing the limited worker threads.
Assuming there is enough RAM in your server, for best performance, you will want to set the "max worker threads" setting to a value equal to the maximum number of user connections your server ever experiences, plus 5. But there are some limitations to this general recommendation, as we will soon see.
As has already been mentioned, the default value for the "max worker threads" is 255. If your server will never experience over 255 connections, then this setting should not be changed from its default value. This is because worker threads are only created when needed. If there are only 50 connections to the server, there will only be that many worker threads, not 255 (the default value).
If you generally have over 255 connections to your server, and if "max worker threads" is set to the default value of 255, what will happen is that SQL will begin thread pooling. This means that a single thread will be responsible for more than one connection. Now comes the dilemma.
If you increase the "max worker threads" so that there is one thread for each connection, SQL Server will take up additional resources (mostly memory). If you have plenty of RAM in your server that is not being used by SQL Server or any other application, then boosting the "max worker threads" can help boost the performance of SQL Server.
But if you don't have any extra RAM available, then adding more worker threads can hurt SQL Server's performance. In this case, allowing SQL Server to use thread pooling offers better performance, because thread pooling uses fewer resources than dedicating a thread to every connection. But, on the downside, thread pooling can introduce resource contention between connections.
For example, two connections sharing a thread can conflict when both connections want to perform some task at the exact same time (which can't be done, because a single thread can only service a single connection at a time).
So what do you do? In brief, if your server normally has less than 255 connections, leave this setting at its default value. If your server has more than 255 connections, and if you have extra RAM, then consider bumping up the "max worker threads" setting to the number of connections plus 5. But if you don't have any extra RAM, then leave the setting to its default value. For SQL Server with thousands of connections, you will have to experiment to find that fine line between extra resources used by additional worker threads and contention between connections all fighting for the same worker thread.
As you might expect, before using this setting in production, you will want to test your server's performance before and after the change to see if SQL Server benefited, or was hurt, from the change.
As part of your audit, follow the advice just given above for how to set this setting.
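To apply the advice above, first count the connections your server typically carries, then size "max worker threads" accordingly. A sketch for SQL Server 2000 (405 is only an example, for a server that routinely sees about 400 connections):

```sql
-- Count current connections; spids above 50 are user processes
SELECT COUNT(*) AS user_connections
FROM master.dbo.sysprocesses
WHERE spid > 50
GO

-- If RAM is plentiful, set max worker threads to connections + 5
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'max worker threads', 405
RECONFIGURE
GO
```

Sample the connection count at your busiest times of day, not just once, before settling on a value.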
Min Memory Per Query
When a query runs, SQL Server does its best to allocate the optimum amount of memory for it to run efficiently and quickly. By default, the "minimum memory per query" setting allocates 1024 KB, as a minimum, for each query to run. The "minimum memory per query" setting can be set from 0 to 2147483647 KB.
If a query needs more memory to run efficiently, and if it is available, then SQL Server automatically assigns more memory to the query. Because of this, changing the "minimum memory per query" setting from its default is generally not advised.
In some cases, if your SQL Server has more RAM than it needs to run efficiently, the performance of some queries can be boosted if you increase the "minimum memory per query" setting to a higher value, such as 2048 KB, or perhaps a little higher. As long as there is "excess" memory available in the server (essentially, RAM that is not being used by SQL Server), then boosting this setting can help overall SQL Server performance. But if there is no excess memory available, increasing the amount of memory for this setting is more likely to hurt overall performance, not help it.
Nested Triggers

This configuration option does affect performance, but not in the conventional way. The "nested triggers" option is set to "1" by default, which means that nested triggers (a nested trigger is a trigger that cascades, up to a maximum depth of 32) can run. If you change this setting to "0", then nested triggers are not permitted. Obviously, by not allowing nested triggers, overall performance can be improved, but at the cost of application flexibility.
This setting should be left to its default value, unless you want to prevent developers from using nested triggers.
Network Packet Size (B)
"Network packet size" determines the size of the packets SQL Server uses when it talks to clients over a network. The default value is 4096 bytes, with a legal range from 512 bytes up to a maximum based on the largest packet the network protocol you are using supports.
In theory, by changing this value, performance can be boosted if the packet size more or less matches the size of the data in the packet. For example, if the data is small, less than 512 bytes on average, changing the default value of 4096 bytes to 512 bytes can boost performance. Or, if you are doing a lot of data movement, such as with bulk loads, or if you deal with a lot of TEXT or IMAGE data, then increasing the default packet size beyond 4096 bytes means it will take fewer packets to send the data, resulting in less overhead and better performance.
In theory, this sounds great. In reality, you will see little, if any, performance boost. This is because there is no such thing as an average data size. In some cases data is small, and in other cases data is very large. Because of this, changing the default value of "network packet size" is generally not very useful.
As a part of your audit, carefully question any value for this setting other than the default. If you can't get a good answer, change it back.
Open Objects

"Open objects" refers to the total number of objects (such as tables, views, rules, defaults, triggers, and stored procedures) that can be open at the same time in SQL Server. The default setting for this option, which is "0", tells SQL Server to dynamically increase or decrease this number in order to obtain the best overall performance of the server.
In rare cases, generally when server memory is fully used, it is possible to get a message telling you that you have exceeded the number of open objects available. The best solution to this is to increase the server's memory, or to reduce the load on the server, such as reducing the number of databases maintained on the server.
If neither of the above options are practical, you can manually configure the maximum number of available open objects by setting the "open objects" value to an appropriately high enough setting. The problem with this is twofold. First, determining the proper value will take much trial and error. Second, any memory allocated to open objects will be taken away from other SQL Server needs, hurting the server's overall performance. Sure, now your application will run when you change this setting, but it will run slower. Avoid changing this setting.
As you are performing your audit, if you find any setting other than "0", either someone made a mistake and it needs to be corrected, the server's hardware is too small and more RAM needs to be added to it, or some of this server's work needs to be moved to another, less busy, server.
Priority Boost

By default, the SQL Server processes run at the same priority as any other applications on a server. In other words, no single application process has a higher priority than another when it comes to receiving CPU cycles.
The "priority boost" configuration option allows you to change this. The default value for this option is "0", which means that the priority of SQL Server processes is the same as that of all other application processes. If you change it to "1", then SQL Server has a higher priority than other application processes. In essence, this means that SQL Server has first claim on CPU cycles over other application processes running on the same server. But does this really boost the performance of SQL Server?
Let's look at a couple of scenarios. First, let's assume our server runs not only SQL Server, but other applications (not recommended for best performance, but a real-world possibility), and that there is plenty of CPU power available. If this is the case, and you give SQL Server a priority boost, what happens? Not much. If there is plenty of CPU power available, a priority boost doesn't mean much. Sure, SQL Server might gain a few milliseconds here and there as compared to the other applications, but I doubt you would be able to notice the difference.
Now let's look at a similar scenario as above, but let's assume that CPU power is virtually all exhausted. If this is the case, and SQL Server is given a priority boost, sure, SQL Server will now get its work done faster, but only at the cost of slowing down the other applications. If this is what you want, OK. But a better solution would be to boost CPU power on the server, or reduce the server's load.
But what if SQL Server is running on a dedicated server with no other applications and if there is plenty of excess CPU power available? In this case, boosting the priority will not gain a thing, as there is nothing competing (other than part of the operating system) for CPU cycles, and besides, there are plenty of extra cycles to go around.
And last of all, if SQL Server is on a dedicated server, and the CPU is maxed out, giving it a priority boost is a zero sum game as parts of the operating system could potentially be negatively affected if you do. And the gain, if any, will be very little for SQL Server.
As you can see, this option is not worth the effort. In fact, Microsoft has documented several problems related to using this option, which makes this option even less desirable to try.
If you find this option turned on in your audit, question its purpose. If you currently are not having any problems with it on, you can probably leave it on without issues. But I would recommend setting it back to its default.
Query Governor Cost Limit
The "query governor cost limit" option allows you to limit the maximum length of time a query can run, and it is one of the few SQL Server configuration options that I endorse. For example, let's say that some of the users of your server like to run very long-running queries that really hurt the performance of your server. By setting this option, you could prevent them from running any query that would exceed, say, 300 seconds (or whatever number you pick). The default value for this setting is "0", which means that there is no limit to how long a query can run.
The value you set for this option is approximate, and is based on how long the Query Optimizer estimates the query will run. If the estimate is more than the time you have specified, the query won't run at all, producing an error instead. This can save a lot of valuable server resources.
On the other hand, users can get really unhappy with you if they can't run the queries they have to run in order to do their jobs. What you might consider doing is helping those users write more efficient queries. That way, everyone will be happy.
Unlike most of my other suggestions, if your audit turns up a value here other than "0", great. As long as users aren't complaining, this is a good deal. In fact, if this setting is set to "0", consider adding a value here and see what happens. Just don't make it too small. You might consider starting with value of about 600 seconds and see what happens. If that is OK, then try 500 seconds, and so on, until you find out when users start complaining, then you can back off.
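The limit can be set server-wide with sp_configure, or per connection with a SET statement, which is a gentler way to experiment. A sketch using the 600-second starting value suggested above:

```sql
-- Server-wide: reject queries the optimizer estimates at over 600 seconds
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'query governor cost limit', 600
RECONFIGURE
GO

-- Per-connection alternative: limit only the current session
SET QUERY_GOVERNOR_COST_LIMIT 600
```

Remember that the check is made against the optimizer's estimated cost, so a query can be rejected even though it might actually have finished within the limit.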
Query Wait (s)
If SQL Server is very busy and is hurting for memory resources, it will queue what it considers memory-intensive queries (those that use sorting or hashing) until there is enough memory available to run them. In some cases, there just isn't enough memory to run them and they eventually time out, producing an error message. By default, a query will time out after a period of time equal to 25 times the estimated amount of time the Query Optimizer thinks it will take for the query to run.
The best solution for such a problem is to add more memory to the server, or to reduce its load. But if that can't be done, one option, although fraught with problems of its own, is to use the "query wait" configuration option. The default setting for this option is "-1", which waits the time period described above, and then causes the query to time out. If you want the time out period to be greater so that queries won't time out, you can set the "query wait" time to a large enough number. As you might guess, you will have to determine this time out number yourself through trial and error.
The problem with using this option is that a transaction containing the intensive query may be holding locks, which can cause deadlocks or other locking contention problems, and in the end these may be a bigger problem than the query timing out. Because of this, changing this option is not recommended.
If you find a non-default value in your audit, find out why. If there is no good reason to keep it, change it back to the default value. But, if someone has thought this out thoroughly, and if you cannot detect any locking issues, then consider leaving this option as is.
Recovery Interval (min)
If you have a very active OLTP server application with many INSERTs, UPDATEs, and DELETEs, it is possible that the default "recovery interval" of 0 (which means that SQL Server determines the appropriate recovery interval) may not be appropriate. If you are watching the performance of your server with Performance Monitor and notice that you have regular periods of 100% disk-write activity (occurring during the checkpoint process), you may want to set the "recovery interval" to a higher number, such as 5 or 10. This figure refers to the maximum number of minutes it will take SQL Server to perform a recovery after it is restarted. The default figure of 0, in effect, works out to a maximum recovery period of about 1 minute.
Another potential reason to use this "recovery interval" option is if the server is devoted to OLAP or a data warehouse. In these instances, these mostly read-only databases don't generally benefit from a short recovery interval.
If your server does not match any of the above suggestions, then leaving the default value is generally the best choice.
By extending the checkpoint time, you reduce the number of times SQL Server performs a checkpoint and, in effect, reduce some of SQL Server's overhead. You may need to experiment with this figure in order to find the ideal compromise between performance and the time it takes for SQL Server to perform a recovery.
Ideally, you want to keep this number as small as possible in order to reduce the amount of time it takes to restart the mssqlserver service the next time that happens. This is because each time the mssqlserver service starts, it goes through an automatic recovery process, and the larger the "recovery interval" is set, the longer the recovery process will take. You must decide what compromise between performance and recovery time best fits your needs.
As part of your audit, you will want to evaluate the current setting for "recovery interval" in regard to its potential use. For busy OLTP servers, you will want to do a lot of research before you decide whether increasing the "recovery interval" will help or not. Testing is important. But if your server is a dedicated OLAP or data warehouse server, increasing the "recovery interval" is an easy decision to make.
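For a dedicated OLAP or data warehouse server, the change is a one-liner with sp_configure. A sketch, using the 10-minute figure mentioned above as an example value:

```sql
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO

-- Allow automatic recovery to take up to roughly 10 minutes
EXEC sp_configure 'recovery interval', 10
RECONFIGURE
GO
```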
Scan for Startup Procs
SQL Server has the ability, if properly configured, to look for stored procedures to run automatically when the mssqlserver service starts. This can be handy if you want a particular action to occur on startup, such as the loading of a specific stored procedure into cache so that it is already there when users begin accessing the server.
By default, the "scan for startup procs" is set to "0", which means that a scan for stored procedures is not done at startup. If you don't have any startup stored procedures, then this is the obvious setting. There is no point spending resources looking for stored procedures that don't exist.
But if you do have one or more stored procedures you want to execute on server startup, then this option has to be set to "1", which turns on the startup scan.
If you find in your audit that this is set to "1", check to see if there are any start-up stored procedures. If not, then return this option back to the default setting.
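Startup stored procedures themselves are flagged with sp_procoption, and the audit step just described can be scripted. A sketch for SQL Server 2000 (the procedure name is hypothetical; startup procedures must live in the master database):

```sql
-- Flag a (hypothetical) warm-up procedure to run when the service starts
USE master
GO
EXEC sp_procoption 'dbo.usp_WarmCache', 'startup', 'true'
GO

-- Audit: list all procedures currently flagged to run at startup
SELECT name
FROM master.dbo.sysobjects
WHERE type = 'P'
  AND OBJECTPROPERTY(id, 'ExecIsStartup') = 1
GO
```

If the audit query returns no rows, "scan for startup procs" can safely go back to "0".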
Set Working Set Size
The "set working set size" option is used when you want to fix the minimum and maximum sizes of the amount of memory that is to be used by SQL Server when it starts. This option also prevents any page swapping.
By default, this setting is set to "0", which means that this option is not used. To turn on this option, it must be set to "1", and in addition, the minimum server memory and maximum server memory settings must be set to the same value. That value is then used to reserve the working set size.
As with most options, this one should not generally be necessary. The only time you might want to consider it is if the server is dedicated to SQL Server, has a very heavy load, and has sufficient memory available. Even then, any performance boost gained will be minimal, and you risk the potential of not leaving enough memory to the operating system. Testing is key to the successful use of this option.
If this option is set to a value other than the default, check also to see if the minimum server memory and maximum server memory settings are set to the same value; otherwise this option will not work correctly. If the conditions above exist, and thorough testing has been done, then consider leaving this setting as is. Otherwise, change it back to the default (and don't forget to change back all three related settings).
User Connections

By default, SQL Server only allocates as many user connections as it needs. This allows those who need to connect to connect, while at the same time minimizing the amount of memory used. When the "user connections" setting is set to its default value of "0", user connections are dynamically set. Under virtually all circumstances, this is the ideal setting.
If you change the default value for "user connections," what you are telling SQL Server to do is to allocate only the number of user connections you have specified, no more or no less. Also, it will allocate memory for every user connection specified, whether or not it is being used. Because of these problems, and because SQL Server can perform this task automatically and efficiently, there is no reason to change this setting from the default.
If your audit shows a value other than "0", change it back to zero. Don't even bother asking why.
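To close out this part of the audit, the full list of current configuration values can be pulled in one pass and compared against the defaults discussed above. A sketch for SQL Server 2000:

```sql
-- List every option with its configured and running values
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure
GO

-- The same information is also available from a system table
SELECT comment, value
FROM master.dbo.sysconfigures
ORDER BY comment
GO
```

Saving this output for each server gives you a baseline to compare against on your next audit.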
Your goal should be to perform this part of the performance audit, described on this page, for each of your SQL Servers, and then use this information to make changes as appropriate, assuming you can.
Once you have completed this part of the performance audit, you are now ready to audit your SQL Server database configurations.