Don’t Go Cheap on Memory!

 

If you are running SQL Server Enterprise Edition (the expensive edition), do not go cheap on RAM. If your databases are larger than 1TB, I would highly recommend starting with 256GB. The best way to justify your memory needs is to start with a baseline of the environment. Counters to begin monitoring include:

  • SQLServer:Buffer Manager\Page life expectancy
  • SQLServer:Buffer Manager\Page reads/sec
  • PhysicalDisk\Disk Reads/sec
  • PhysicalDisk\Avg. Disk sec/Read
  • SQLServer:SQL Statistics\Batch Requests/sec

Terminology for the above counters:

  1. SQLServer:Buffer Manager\Page life expectancy – the number of seconds the average page of data stays in the buffer pool. Keeping data in memory gives SQL Server quick access to it instead of making the long, slow trip to disk.
  2. SQLServer:Buffer Manager\Page reads/sec – the number of physical database page reads issued per second.
  3. PhysicalDisk\Disk Reads/sec – the rate of read operations on the disk.
  4. PhysicalDisk\Avg. Disk sec/Read – measures read latency on the disks.
  5. SQLServer:SQL Statistics\Batch Requests/sec – the number of T-SQL batches being executed per second. In my opinion this is the one metric that gives the best baseline on how your server is performing.
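Several of these counters can also be sampled from inside SQL Server via the sys.dm_os_performance_counters DMV (the PhysicalDisk counters live only in PerfMon). A minimal sketch:

```sql
-- Sample the SQL Server counters listed above from inside the instance.
-- Note: Page life expectancy is a point-in-time value, while the /sec
-- counters are cumulative totals that must be sampled twice and
-- differenced over an interval to get an actual per-second rate.
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Page life expectancy',
                       N'Page reads/sec',
                       N'Batch Requests/sec');
```

On a named instance, [object_name] is prefixed with MSSQL$InstanceName rather than SQLServer, so filter accordingly.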

 

Once you run your baseline for about a week during your core hours, you will get a better understanding of how much RAM you need. For SQL Server 2012 Standard Edition, just get 64GB (the edition’s memory limit) and run away; do not get less.

Benefits of RAM

  1. Adding more RAM is easy and not very expensive nowadays. It doesn’t really solve an underlying query problem, but it can be an acceptable short-term solution. How much RAM? Enough to hold the active portion of your databases, and only you can tell how much that is. Not all versions and editions of SQL Server and Windows can use the same amount of RAM, so make sure you’re not limited by that.
  2. Memory is a lot faster than disk.

 

Don’t forget to set your Max Server Memory:

The max server memory option sets the maximum amount of memory that the SQL Server instance can utilize. It is generally used when other applications run alongside SQL Server and you want to guarantee that those applications have sufficient memory to function properly.

Some apps will only use whatever memory is available when they start and do not request more even if needed. That is where the max server memory setting comes into play.

On a SQL Server cluster or farm, for example, several instances could be competing for resources. Setting a memory limit for each instance, so that they are not duking it out over RAM, helps guarantee consistent performance.

Don’t forget to leave at least 4-6GB of RAM for the operating system to avoid performance issues.
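As a sketch, this is how the cap is set with sp_configure; the numbers here assume a hypothetical 64GB server with roughly 6GB reserved for the OS, so size them for your own hardware:

```sql
-- Cap the instance at 58GB on a 64GB server, leaving about 6GB for the
-- operating system and other processes. The value is specified in MB.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 59392;  -- 58 * 1024
RECONFIGURE;
```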

 

Some other ways to improve performance:

  1. Purge some data. Are you sure you need all the data you have in there?
  2. Create smaller indexes. That means including in your indexes the minimum set of columns that cover your queries. Big scans work faster on smaller indexes and require less RAM. The tradeoff is more space used on disk and more operations to perform when updating the table.
  3. Create filtered indexes. If you often filter on a common, non-selective condition (e.g. active = 1 or something similar), a filtered index might help reduce the size of the indexes. This has some impact on the application as well (some SET option restrictions to make it work).
  4. Use compression. Data is compressed both on disk and in memory, so ROW/PAGE compression can be a way to reduce memory consumption. You will need Enterprise Edition and you have to give up some CPU, but it’s often worth it for huge objects.
  5. Use the correct data types. Using (n)char where (n)varchar would do, or int when tinyint is enough, wastes not only disk space but also buffer pool space. This applies to both character and numeric data types: make sure you’re using the right type, size, precision, and scale for your data columns.
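Items 3–5 above can be sketched in T-SQL. The dbo.Orders table and its columns here are hypothetical, used only to illustrate the syntax:

```sql
-- Filtered index: only 'active' rows are indexed, keeping the index small.
-- (Requires certain SET options, e.g. QUOTED_IDENTIFIER ON, at create time.)
CREATE INDEX IX_Orders_Active
ON dbo.Orders (CustomerID) INCLUDE (OrderDate)
WHERE IsActive = 1;

-- ROW compression on a large table (Enterprise Edition in SQL Server 2012),
-- shrinking the pages both on disk and in the buffer pool:
ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = ROW);

-- Right-sized data type: tinyint (1 byte) instead of int (4 bytes) for a
-- status column that only ever holds a handful of values.
ALTER TABLE dbo.Orders ALTER COLUMN OrderStatus tinyint;
```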

 

More Resources/Blogs

  1. Paul Randal has an excellent blog post on Page Life Expectancy
  2. Great blog on Batch Requests/sec