Rambling thoughts

All About My thoughts…..

My Experiments with SQL 2014 July 27, 2013

Filed under: SQL Server, SQL Server 2014 — Aditya Badramraju @ 7:47 am

Let me share my experience from a first look at this product.

Installation

  • As we all already know, this CTP can't be installed side by side with older versions, so I set it up in a new VM running Windows Server 2012, which comes with .NET 4.0 installed
  • SQL Server still can't survive without the .NET 3.5 framework, though, so I had to install that as well; after that there were no other identified problems

Installation is pretty similar to 2012; no major changes.

Hekaton:

The first thing I tried immediately after installing SQL Server was In-Memory OLTP. Before I elaborate, let me thank Kalen Delaney for her first white paper on In-Memory OLTP, which uncovered many of these details.

Let’s get going

As we all already know what the technology is, here is what I tried.

I first created a database which is in-memory aware. Remember, we need to add a separate memory-optimized filegroup, either to an existing database or while creating a new one.

CREATE DATABASE HEKATON
ON PRIMARY
(Name = HEKATON_DATA,
FILENAME = 'C:\MSSQL\hekaton_data.mdf', size = 500MB),
filegroup memoryoptimised contains MEMORY_OPTIMIZED_DATA
(Name = inmemory_1, filename = 'C:\MSSQL\hekaton_dir'),
(Name = inmemory_2, filename = 'C:\MSSQL\hekaton_dir2')
log on (name = hekaton_log, filename = 'C:\MSSQL\hekaton_log.ldf', size = 500MB)
collate Latin1_General_100_BIN2

If you observe, we haven't specified .ndf at the end of the secondary files in the filegroup memoryoptimised. The reason is that SQL Server stores memory-optimized data in filestream containers (directories) rather than normal SQL Server files; we will discuss why it stores data this way later.
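If you want to verify this, query sys.database_files in the new database; the containers under the memory-optimized filegroup show up as FILESTREAM entries pointing at directories (a quick sketch, assuming the catalog views in the CTP behave as documented for SQL Server 2014):

USE HEKATON
go
-- data and log files are ROWS/LOG; the memory-optimized containers are FILESTREAM
SELECT name, type_desc, physical_name
FROM sys.database_files
go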

Next, let me create an in-memory table:

CREATE table inmemory1
(id int not null primary key nonclustered hash with (bucket_count = 1024),
name varchar(30) null,
city varchar(20) null,
lastmodified datetime not null
) with (memory_optimized = on, durability = schema_and_data)

Following are the points of interest from the above CREATE script:

  • We must specify whether the table is in-memory or not via the MEMORY_OPTIMIZED option
  • In-Memory OLTP supports two types of durability (a sketch of the SCHEMA_ONLY variant follows this list):
    • SCHEMA_ONLY: after a restart or crash of the system we will have only the SCHEMA of the tables we created in memory
    • SCHEMA_AND_DATA: here the Hekaton engine retains the data, along with the schema of course
  • We can create only nonclustered indexes on in-memory tables (bucket count, and why only NCI, will get a separate blog post)
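To make the durability difference concrete, here is a minimal sketch of a SCHEMA_ONLY table (the table name and columns are made up for illustration); its rows vanish on restart and only the definition survives, which makes it handy for staging-style data:

CREATE table inmemory_staging
(id int not null primary key nonclustered hash with (bucket_count = 1024),
payload varchar(100) null
) with (memory_optimized = on, durability = schema_only)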

Let's input some junk data into the table:

declare @i int
set @i = 1000000
while (1 < @i)
begin
insert into inmemory1 values (@i + 1, 'Aditya', 'Hyderabad', getdate())
set @i = @i - 1
end

To compare with native tables, let us create one native (disk-based) table with the same schema and load the same data:

CREATE table outmemory1
(id int not null primary key nonclustered,
name varchar(30) null,
city varchar(20) null,
lastmodified datetime not null
)

I executed the following simple queries; let us see what the plans look like.

select * from inmemory1

(screenshot: query plan for inmemory1)

select * from outmemory1

(screenshot: query plan for outmemory1)

  • We can see that the in-memory table did an index scan while the out-memory table did a table scan
  • We know why the out-memory table does a table scan: there is no clustered index
  • But the in-memory table does an index scan because the main data is stored in the index pages themselves (a query to peek at the hash index follows this list)
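If you want to peek at the hash index itself, something along these lines should work (assuming sys.hash_indexes and sys.dm_db_xtp_hash_index_stats behave in the CTP as documented for SQL Server 2014):

-- bucket counts and chain lengths for hash indexes on memory-optimized tables
SELECT hi.name, hi.bucket_count, st.empty_bucket_count, st.avg_chain_length
FROM sys.hash_indexes hi
JOIN sys.dm_db_xtp_hash_index_stats st
ON hi.object_id = st.object_id AND hi.index_id = st.index_id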

I also captured the output of sys.dm_exec_requests while executing the above statements, to show that physical reads and writes for the in-memory table are 0, while the out-memory table obviously shows some value.

Output for the outmemory1 select:

select writes, reads, logical_reads, * from sys.dm_exec_requests where session_id = 52

(screenshot: dm_exec_requests output for the outmemory1 select)

Output for the inmemory1 select:

(screenshot: dm_exec_requests output for the inmemory1 select)

 

Yayyy!! We now have data in memory… So, if I join two in-memory tables, would I still see physical reads? Let us take a look. (I created a second in-memory table, inmemory2, the same way as the first.)

select * from inmemory1 i
inner join inmemory2 i2
on i.id = i2.id

(screenshot: execution details for the join)

Ooops!! I have huge reads!!! And when I went to Resource Monitor I saw tempdb.mdf taking heavy IOPS.

(screenshot: Task Manager/Resource Monitor showing IOPS on tempdb.mdf)

Yes, the reason is a worktable. No matter what kind of technology you use, if your query goes wrong, your performance will go wrong.
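To confirm that the tempdb activity comes from internal objects (the worktable) rather than from user objects, one hedged check is the task space usage DMV; session_id 52 here is just the session from my earlier screenshots:

-- internal objects (worktables, sort runs) allocated in tempdb by this session
SELECT session_id, internal_objects_alloc_page_count, internal_objects_dealloc_page_count
FROM sys.dm_db_task_space_usage
WHERE session_id = 52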

For this blog post we will do one last test.

While all this is happening, I want to restart SQL Server and see how it behaves (remember, I am not doing any writes or dirtying buffers anywhere).

So I issued

shutdown with nowait

And started SQL Server… Now, when I tried to open my Object Explorer, I ended up seeing:

(screenshot: Object Explorer error while the database was in recovery)

And after some time I got hold of SQL Server, and this is what I saw in the database list and the errorlog:

(screenshot: errorlog entries)

(screenshot: database list showing the database in recovery)

From this we can say that the in-memory data is loaded back into memory at the point of startup/recovery.
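One way to see that the data really is back in memory after the restart (again assuming the SQL Server 2014 DMVs behave as documented) is to check the memory consumed per memory-optimized table:

-- run in the HEKATON database: memory consumed by in-memory tables and their indexes
SELECT object_name(object_id) AS table_name,
memory_used_by_table_kb, memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats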

I will sign off here; in next week's blog post we will discuss:

  • Checkpoint process
  • Index storage
  • Bufferpool extension

Thank you so much for reading this blog. Let me know your suggestions @ adityafeb22@gmail.com and leave your comments below :)… Happy Learning

 

He Crashed and Failed Me Over :).. Interesting Scenario on Cluster Failovers July 21, 2013

Filed under: Uncategorized — Aditya Badramraju @ 8:27 am

It's been quite a while since I visited this place… but never too late :)… Today let's go techie…

MY CLUSTER IS FAILING OVER FREQUENTLY… What do we actually do?

  • Check cluster events
  • Check Event Viewer
  • Generate the cluster log and look for the root cause of what happened

                AND there comes the CRAZY error:

                Cluster resource 'FileServer-' (resource type '', DLL 'clusres.dll') either crashed or deadlocked. The Resource Hosting Subsystem (RHS) process will now attempt to terminate, and the resource will be marked to run in a separate monitor.

Let me decode some words from the above error:

RHS: Resource Hosting Subsystem. This is the process where all the cluster resources reside. When I say resources, they are:

  • DISKS
  • IP
  • NAME
  • FILE Shares
  • Services hosted on Cluster etc..

The Resource Control Manager (RCM) checks the health of all the resources in this subsystem at regular intervals using different DLLs, such as clusres.dll mentioned in the error above.

So RCM ensures the health of the resources hosted in rhs.exe. A failed health check of any resource makes RCM restart (crash) RHS, and all the resources hosted in that one rhs.exe go down and come back up. That triggers the failover or restart of important resources, which is why we end up seeing many failovers or restarts of our SQL instances. When the restart is done, RCM intelligently marks the problematic resource that actually caused the crash to run in a separate RHS, to ensure there will be no further problems and ALL IS WELL in future :)

Now that we have explained what the error is, let me explain a possible solution to avoid restarts caused by the failure of mere file shares. (Windows has this habit of failing a health check on a file share and bringing down the whole instance, as in the previous error.)

Solution:

If you can't address the Problem, Avoid the Problem :)

What we are going to do to address this problem is run the problematic resources, like file shares, in a separate monitor beforehand, so that on any health check failure we avoid the restart of all the other resources except the FILE SHARES.

 

 

And we can do this in two ways: through the Failover Cluster Manager GUI, or with simple POWERSHELL commands:

Import-Module FailoverClusters                     # import the failover clustering module
(Get-ClusterResource "rr").SeparateMonitor         # check the status of the resource; here "rr" is the file share name
(Get-ClusterResource "rr").SeparateMonitor = 1     # set it to run in a separate monitor
(Get-ClusterResource "rr").SeparateMonitor         # recheck the status

 

Please drop me an email @ adityafeb22@gmail.com, or do comment if there is any more information I should provide.

               

 

Reinventing ME(Restoring!!!!) June 26, 2012

Filed under: Uncategorized — Aditya Badramraju @ 2:15 pm

Of late I have found it hard to blog, and my mind finds it so hard to think of qualitative stuff (this is called DUMBNESS).

But a hell of a lot of things are happening in and around me, and I thought of using this situation to kick-start my blogging skills again.

Changing the Company

I am now blogging this from a reputed financial company, S&P Capital IQ, as I quit TCS a week ago. The decision I made was harsh on me and my company.

Like all other IT companies, my previous employer tried to stop me, but I just called it off, not knowing the price I would pay for it.

But the most painful thing about leaving TCS is the TCS'ers (that is what we call the employees of TCS). It became really hard for me to digest the fact of leaving them; the same thing happened when I joined TCS as well.

I owe all my dear friends big time for making me better.

With that note, I joined S&P on 21st June 2012, leaving all the fun, career and challenges behind.

Unlike at my previous employer, the formalities of joining and getting into a project were a cakewalk (thank god!!! no inductions, no bunch of signatures).

During the first two days, when I was on the floor and started browsing on my machine, I felt like I was alone here; people around me stick to either their computers or go down for TT matches.

Mr. Aditya!!! Welcome to a financial product company :(

Here we have a bunch of talented SQL Server DBAs (previously I was the only DBA) and talented technologists (thanks to IIT, NIT and the like).

Lots to learn, and lots to swim through in this SQL world… the only thing I never ever get bored of :)

As of now, fingers crossed, and just hoping for the BEST to COME :)

 

MS TechedIn- Dream Big March 26, 2012

Filed under: Uncategorized — Aditya Badramraju @ 5:40 am

In my early career days I dreamt of attending an event where someone speaks about the technology I work on, and getting goosebumps while listening to it.

Microsoft and TCS made my dream come true. Let me drive you through the experience.

1st Day of Teched

I was at the registration counter, and as I had no companions I ended up listening to the people in the queue. I heard people talking about their managers, about Microsoft, about the girls around. But the one thing I heard in common was the excitement: excitement to see What Is BIG.

I got my goodies packet and saw the passport in it (and of course got excited about the Reebok watch!!!). Guided by the TechEd team, I walked down to the stage. "AW!!!" was my first impression on seeing the stage; I wondered whether I had walked into some Filmfare awards function.

Within no time Sanket started his keynote; he walked through the journey of MS and its success story. He concluded the session by making an oath that we are going BIG.

With that awesome keynote, I stepped into the SQL Server developer track.

The highlight of all the sessions was Pinal and Vinod's session demystifying many myths around SQL Server. Vinod singled me out during the session for the benefit of the others; of course, I felt great to be recognized :)

When my brain was almost out of memory, there came the reliever, the Demo Extravaganza, which was thoroughly enjoyed by one and all.

In between, I had a super chat with the MTC architects and debunked many of my own myths.

Day-2

Basically this was not my day, as I had no particular track to follow, but the keynote by Amit Chatterjee left me awestruck.

I dreamt of being as successful as him (am I dreaming BIG here!!!).

Following the super keynote, I entered the hands-on labs for Power View and AlwaysOn; the former was not that useful to me, but the latter was very helpful and I enjoyed keeping up with the pace of the speakers.

Later in the afternoon I explored all the expo showrooms. The things that attracted me were the Surface table and Windows Server 8 itself.

I got to learn a lot on and around Windows Server 8. I really have to say it's going to change the way servers function.

And again this was followed by a great Demo Extravaganza where Pinal rocked the floor. East or West, SQL is the Best ;)

Day-3

Somehow I was hit by sunstroke and had hardly slept the day before, but since the sessions on this day were totally SQL oriented, I gathered all my energy and went back for Day 3.

First session by Vinod and second by Pinal… both were awesome.

I am basically a fan of SQL internals, and I love to discuss them with my fellow members.

The memory changes and the indirect checkpoint internals in SQL 2012 made my brain restless to go and work on them.

I thank Vinod for shedding light on these topics

I really didn't know that parallelism involved three other functions. Thanks to Pinal and his energy in bringing this all together.

I thank Pinal for stopping in the middle and recognizing me (and also for giving me his card).

By then my energy was totally exhausted, and I felt really unlucky that I was not able to attend the sessions of my favorite speakers, Amit and Balmukund.

I took my medicine and travelled back to the hotel to call it a day.

But now that I am here thinking about the takeaways, here is the list:

*Dream Big

*Dream Big……

Yes, TechEd brought a great change to my thought process and the way I design my career.

I am dreaming Big to be an MCM on SQL Server.

I am dreaming Bigger to be a speaker at TechEd.

I am dreaming Big to reimagine myself and discover myself each day.

Thank you very, very much TechEd and Microsoft. These 3 days mean a lot to me :)

 

The Curious Case of SQL Server Disaster February 19, 2012

Filed under: Uncategorized — Aditya Badramraju @ 3:53 pm

Many people, especially in the IT industry, won't realize the importance of the DBA and his advice until they see a DISASTER hit their databases.

Last week I witnessed 3 database disasters on 3 successive days, and following is my experience with this curious case.

First Disaster (Wednesday):

I was yawning at home, watching a TV channel forced on me by Grandma, when at that very moment I got a call from the office saying that 823 error mails were flooding the mailbox.

I immediately got free of my grandma and switched on my desktop to see the mails, and found that the databases were not accessible.

As I didn't have access from home, I asked my pal to run the following command to check the status of the databases:

SELECT state_desc, * FROM sys.databases

 

And the output we got is

(screenshot: sys.databases output showing the state as ONLINE)

I then asked him to check the database on which the 823 error was reported.

The database state shows ONLINE, so why is my database still not opening? I knew the following table from BOL:

ONLINE: Database is available for access. The primary filegroup is online, although the undo phase of recovery may not have been completed.

OFFLINE: Database is unavailable. A database becomes offline by explicit user action and remains offline until additional user action is taken. For example, the database may be taken offline in order to move a file to a new disk. The database is then brought back online after the move has been completed.

RESTORING: One or more files of the primary filegroup are being restored, or one or more secondary files are being restored offline. The database is unavailable.

RECOVERING: Database is being recovered. The recovering process is a transient state; the database will automatically become online if the recovery succeeds. If the recovery fails, the database will become suspect. The database is unavailable.

RECOVERY PENDING: SQL Server has encountered a resource-related error during recovery. The database is not damaged, but files may be missing or system resource limitations may be preventing it from starting. The database is unavailable. Additional action by the user is required to resolve the error and let the recovery process be completed.

SUSPECT: At least the primary filegroup is suspect and may be damaged. The database cannot be recovered during startup of SQL Server. The database is unavailable. Additional action by the user is required to resolve the problem.

EMERGENCY: User has changed the database and set the status to EMERGENCY. The database is in single-user mode and may be repaired or restored. The database is marked READ_ONLY, logging is disabled, and access is limited to members of the sysadmin fixed server role. EMERGENCY is primarily used for troubleshooting purposes. For example, a database marked as suspect can be set to the EMERGENCY state. This could permit the system administrator read-only access to the database. Only members of the sysadmin fixed server role can set a database to the EMERGENCY state.

 

We didn't have any of the above states except ONLINE. So I decided to poke at the database and see what the actual error was:

Alter database adventureworks set online

 

Then I got the error

The operating system returned error 21 (The device is not ready.) to SQL Server during a read at offset 0x00000000074000 in file 'H:\MSSQL\Data\AdventureWorks.mdf'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

So what is this?????

This is one more state of the database: INCONSISTENT.

In this case the database isn't accessible at all, because recovery hasn't run or completed, and so the database is in an inconsistent state. It could just be logically inconsistent (e.g. a transaction modifying data hasn't recovered) or, worse, it could be structurally inconsistent (e.g. a system transaction modifying index linkages hasn't recovered). Either way, SQL Server wants to prevent you from getting into the database because it doesn't know what state the data and structures in the database are in. But if you don't have a backup, you need to get into the database, no matter what state things are in. (courtesy: Paul Randal)

The way to get out of this is to bring the database into EMERGENCY mode, run DBCC CHECKDB, and then, if there are no errors, bring the database online:

 

Alter database adventureworks

set EMERGENCY

go

Once the database is in EMERGENCY mode it is only accessible by sysadmin :). Now it's time to run DBCC CHECKDB:

DBCC CHECKDB ('AdventureWorks') WITH NO_INFOMSGS

 

Once you find there are no errors, it's time to bring the database online:

 

Alter database adventureworks

set ONLINE

go

 

Second Disaster (Thursday):

We were happy that the databases were online, and I documented the issue for the client after finding the root cause. Then, on my way home, I got a phone call from my pal saying we had hit the same situation on another server, so I asked him to take the same action as the day before.

But by then we had a situation where one of our databases had gone into the SUSPECT state, so we approached it with the following procedure:

Alter database adventureworks

set EMERGENCY

go

 

Alter database adventureworks

set SINGLE_USER

go

 

DBCC CHECKDB ('AdventureWorks') WITH NO_INFOMSGS

 

Here we saw a bulk of allocation and consistency errors. Then we had two options:

1. Restore the latest backup

2. Run repair allowing data loss

Unfortunately we have a SUPER backup team who failed to give us a backup immediately. After a long discussion, we decided to run the repair, as our client was confident that the data load would be re-done the next day anyway.

 

So we then ran the following:

 

DBCC CHECKDB ('AdventureWorks', REPAIR_ALLOW_DATA_LOSS) -- check before adapting this to your environment

 

We got the following output:

 

The log for database ‘AdventureWorks’ has been rebuilt. Transactional consistency has been lost. The RESTORE chain was broken, and the server no longer has context on the previous log files, so you will need to know what they were. You should run DBCC CHECKDB to validate physical consistency. The database has been put in dbo-only mode. When you are ready to make the database available for use, you will need to reset database options and delete any extra log files.

 

This was followed by a long chain of messages as the database was repaired and the corrupted data was deleted.

We then brought the database back to multi-user and online:

 

Alter database adventureworks

set MULTI_USER

go

 

Alter database adventureworks

set Online

go

Third Disaster (Friday):

So, because of a poor backup strategy, we ended up repairing the database with data loss. Fortunately we didn't see huge data loss.

The disaster that happened today was the worst: the entire disk holding the system data files got corrupted. Again, I am describing a situation you should never be in (HAVE BACKUPS, PLEASE!!!!!).

Here we didn't have backups. Somehow the server team managed to get SQL Server installed, and now our task was to attach the production database.

We tried to attach the database with the single MDF file (a sketch of such an attempt is below), and then we got the following error:
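For reference, the attach attempt was something along these lines (the file path is taken from the earlier error message; treat this as a sketch rather than the exact command we ran):

CREATE DATABASE AdventureWorks
ON (FILENAME = 'H:\MSSQL\Data\AdventureWorks.mdf')
FOR ATTACH_REBUILD_LOG

ATTACH_REBUILD_LOG only works when the database was shut down cleanly, which is exactly why it failed here.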

The log cannot be rebuilt because there were open transactions/users when the database was shutdown, no checkpoint occurred to the database, or the database was read-only. This error could occur if the transaction log file was manually deleted or lost due to a hardware or environment failure.

Then what we did was (the T-SQL side of this sequence is sketched after the list):

  • Create a new database with the same name
  • Stop SQL Server
  • Swap in the production MDF file and delete the log file
  • Start SQL Server
  • Take the database into EMERGENCY mode
  • Then into SINGLE_USER mode
  • Run DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS so that the log is rebuilt forcibly
  • Yayyyyyyyyyy!!! We now have the log file :)
  • Bring the database back to multi-user and online
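In T-SQL, the database-side steps of that list look roughly like this (a sketch only; the file swap and the service stop/start happen outside SQL Server, and you should test all of this before going anywhere near production):

ALTER DATABASE AdventureWorks SET EMERGENCY
go
ALTER DATABASE AdventureWorks SET SINGLE_USER
go
-- rebuilds the log forcibly; transactional consistency can be lost
DBCC CHECKDB ('AdventureWorks', REPAIR_ALLOW_DATA_LOSS)
go
ALTER DATABASE AdventureWorks SET MULTI_USER
go
ALTER DATABASE AdventureWorks SET ONLINE
go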

Thus you have the database online.

The above methods are applicable ONLY when:

  • You don't have valid database backups

As administrators it is our first duty to make sure we never hit that constraint, but if we are left with no choice, we always have hacking methods :)

Happy Troubleshooting :)

 

Performance Metric Collection Made Easy December 27, 2011

Filed under: SQl Server — Aditya Badramraju @ 6:02 pm

Many times DBAs find themselves in a position where they need to collect data for a list of performance counters over a period of time. Many depend on 3rd-party tools, and others on scripts.

Thanks to SQL Server 2008 R2, we can collect this data through a System Data Collector Set.

The following Word document tells the full story of configuring the system data collection:

Collection Performance Metrics
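If you just need a quick point-in-time sample rather than a collection over time, SQL Server also exposes its own counters through a DMV. A small illustration (the counter names here are common ones, but verify them on your instance):

-- point-in-time sample of two common SQL Server counters
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy', 'Batch Requests/sec')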

Please let me know your feedback.

 

Yeah..I AM A Speaker December 24, 2011

Filed under: Uncategorized — Aditya Badramraju @ 10:39 am

After a long wait I am here, and I am a speaker (YEPIEEE)….

The experience was simply astounding… Talking about something that drives you crazy is always the best (be it a girl or technology).

I love the technology I work on… and it inspires me to stay late and talk a lot…

The same happened today at DEVCON-December at the Hyderabad chapter of the Microsoft user group.

I enjoyed each and every bit of the time I talked, and I really didn't notice how the hour passed.

I thank the great audience, Hima, Pranov and Chakri.

I also thank my gurus Paul Randal, Kimberly, Pinal and Adam Machanic for making me what I am today…

Looking forward to delivering more interesting stuff on SQL Server…

By the way, following are the links to download all the scripts and stuff I talked about:

SCRIPTS_STUFF

Performance_SQL_SERVER_2008

It's time for MI4… Tom, here I come… Merry Christmas and Advance Happy New Year, fellas…

 

 