Microsoft SQL Server Tips & Tricks

Tampering with the master.vmp file may result in losing all Analysis Services databases


The master.vmp file is the master version map that contains the GUIDs for all of the objects, and the version of each object, that currently exist on the server.

 

When the server starts, it looks at the master.vmp file and attempts to find each of the objects referenced in the file. If there are objects in the data directory that don't have a corresponding GUID in master.vmp, those objects are deleted. If the master.vmp doesn't exist when the server starts, the server creates a new master.vmp file and deletes all of the objects in the data directory.

 

If you lose the master.vmp file, everything in the data directory will be deleted, and there is no way to recover the databases unless you either have a current backup of each database or have a copy of the database project as it existed on the server.

 

 

Karan Gulati,

Support Engineer, Microsoft SQL Server


Analysis Services Preallocate Memory Setting – Insight


Q: What is the PreAllocate setting?

Answer: The PreAllocate setting specifies that a certain percentage of memory be allocated to Analysis Services when the service starts (memory preallocation). This configuration setting was introduced because the Microsoft Windows Server 2003 operating system did not scale well with many small virtual memory allocations, due to memory fragmentation and inefficiencies in retrieving information already located in memory.



Q: What is generally the best setting for PreAllocate?

To determine this value, monitor the peak value for the Process: Private Bytes counter found in the Performance Monitor tool for the msmdsrv instance. The peak value establishes the maximum value for memory preallocation that you need to set. When you cannot set the preallocation value this high, set it as high as possible without starving other processes and the operating system.

 

If you use memory preallocation with SQL Server 2008 (or SQL Server 2005), use a value that is low enough to ensure that sufficient memory remains for other processes on the computer (avoiding paging) and high enough for Analysis Services (use the peak value for the Process: Private Bytes counter for the msmdsrv instance to establish this value).

Ensure that you configure TotalMemoryLimit to be greater than PreAllocate; otherwise Analysis Services will go into aggressive cleaning mode.

If the memory used by Analysis Services is above the value set in the Memory\TotalMemoryLimit property, the cleaner cleans until the memory used by Analysis Services reaches the Memory\TotalMemoryLimit. When the memory used by Analysis Services exceeds the Memory\TotalMemoryLimit, the server goes into an aggressive mode where it cleans everything that it can. If the memory used is mostly non-shrinkable (more information on non-shrinkable memory is included in the next section), and cannot be purged, Analysis Services detects that the cleaner was
unable to clean much. If it is in this aggressive mode of cleaning, it tries to cancel active requests. When this point is reached, you may see poor query performance, out of memory errors in the event log, and slow connection times.

 

Rule of thumb:
The PreAllocate value should be lower than the Memory Limit Low value (LowMemoryLimit).

Secondly, other processes on the same server shouldn't starve for memory. If you have multiple services running on the same server (for example, the SQL Server engine, Reporting Services, or Integration Services), use a value that is low enough to ensure that sufficient memory remains for the other processes on the computer (avoiding paging), yet high enough for Analysis Services.


Q: Are there any other memory settings that could conflict with PreAllocate?

Answer: No.

Q: How do you know if PreAllocate is working?

Test1:
Windows 2003 SP3
Ram 16 GB
SQL 2008 Analysis Services
No activity is going on in Analysis Services.

Default Setting:
<Memory>
<TotalMemoryLimit>80</TotalMemoryLimit>
<LowMemoryLimit>65</LowMemoryLimit>
<PreAllocate>0</PreAllocate>

</Memory>
Perfmon Counters (Average Columns)
Memory Limit High 13420851 KB (Around 80% of 16 GB)
Memory Limit Low 10785914 KB (Around 65% of 16 GB)
Memory Usage Limit 63704 KB (hardly 62 MB actually used by Analysis Services)

Changed Configuration:
<Memory>
<TotalMemoryLimit>80</TotalMemoryLimit>
<LowMemoryLimit>65</LowMemoryLimit>
<PreAllocate>30</PreAllocate>
</Memory>
Perfmon Counters (Average Columns)
Memory Limit High 13420851 KB (Around 80% of 16 Gb)
Memory Limit Low 10785914 KB (Around 65% of 16 GB)
Memory Usage Limit 4194304 KB (Around 30% of 16 GB)

This confirms that we are getting the value we configured for the PreAllocate memory setting.


Additional Ref:
http://sqlcat.com/technicalnotes/archive/2008/07/16/running-microsoft-sql-server-2008-analysis-services-on-windows-server-2008-vs-windows-server-2003-and-memory-preallocation-lessons-learned.aspx

 

 

Karan Gulati
Support Engineer, Microsoft SQL Server

SQL Server Analysis Services Port (SQL 2005 / 2008)


Default port: 2383

You can change the default port for AS in the msmdsrv.ini file of the instance.
The port used by the SQL Server Browser service for SSAS is 2382.

How to determine on which port AS is running?

- Open Task Manager and get the PID for msmdsrv.exe

- Open a command prompt and type netstat /ao >> c:\output.txt

- Look for the PID in the output file and the corresponding TCP IP:Port information for the same PID

- To confirm you got the right port number, open Management Studio and connect to AS using IP Address:Port Number (for example, 192.168.1.1:5585)

How to change Port for AS Services (2005 & 2008)?

- In a clustered environment, AS can listen only on port 2383.

- For a standalone default or named instance, you can change the port number in the msmdsrv.ini file (<Port>0</Port>)

Caveat:

SQL Server Browser enumerates all SSAS instances and provides connection information for them.

Important: Default instance information is not handled by the SQL Server Browser, which means that if you're using a default instance with an alternate port, you will need to provide the port to the client application; SQL Browser will not be able to forward it at connection time.

SQL Server Browser will only provide the connection information for visible SSAS named instances.

How AS Port is Determined in Standalone or Clustered Environment?

What if I have multiple NIC cards on a box; how exactly will SSAS listen?

-SSAS as a Standalone Instance

Default instance

SSAS will start listening on all IP addresses of the box using the port specified in the msmdsrv.ini file, or the default port (2383) if "0" is specified.

Named instance

SSAS will start listening on all IP addresses of the box using the port specified in the msmdsrv.ini file, or an available port provided by the system if "0" is specified.

-SSAS as a Clustered Instance (Default Instance or Named Instance)

SSAS will start listening on all IP addresses of the cluster group using the default port (2383). Any alternate port configuration is ignored.

Important: In a clustered environment AS can listen only on port 2383; any manual setting in the msmdsrv.ini file will be ignored.

Note that you can't run a standalone and a clustered instance on the same box.

Related Links:

How to: Configure Windows Firewall for Analysis Services Access
http://msdn.microsoft.com/en-us/library/ms174937.aspx

Managing Multiple Instances of Analysis Services
http://msdn.microsoft.com/en-us/library/ms174906.aspx

 

Karan Gulati
SE, Microsoft Analysis Services

SQL 2005 (SQL 2008) Analysis Services Server Side Tracing

With Analysis Services 2005 (2008) you can collect traces without using the GUI; this feature is known as Server Side Tracing. In this article we'll talk about how to achieve Server Side Tracing. How to create a Server Side Trace: open SQL Server Profiler, connect...(read more)

Why do we need SPN for File Server (NAS / RAS / File Share System) DNS Alias (Cname)

Very often we use a UNC location while taking a backup of a SQL or Analysis Services database, or when uploading data into SQL tables using a bulk load command where the text or CSV file is located on a file server, so we access it via UNC. Sometimes the file server machine...(read more)

Troubleshooting : Error: 8624, Severity: 16, State: 21. Internal Query Processor Error: The query processor could not produce a query plan.


Sometimes we see the below error in SQL Server 2008 R2 SP1 when executing a query in SQL Server Management Studio:

————————————————————–

Error: 8624, Severity: 16, State: 116.

Internal Query Processor Error: The query processor could not produce a query plan. For more information, contact Customer Support Services.

————————————————————–

 

So, how do we handle this situation? I am using SQL Server 2008 R2 SP1 specifically, because some bugs related to the above problem were fixed as early as SQL Server 2005 SP2:

http://support.microsoft.com/kb/931329

 

Since I am using SQL Server 2008 R2 SP1, I should not get this error message, but I am still getting it. So maybe what I am hitting is not the above bug but something different. Let's check.

 

I have used the below query (well, the logic of the query is not important here) in my SSMS:

—————————————————————

select * from
(select c2 "c7", c4 "c8", COUNT(1) "c9"
 from Table1
 group by c2, c4) Table2
where (c7) not in
(select c7 from
 (select c2 "c7", c4 "c8", COUNT(1) "c9"
  from Table1
  group by c2, c4) Table2,
 (select c7 "c11", COUNT(c7) "c12" from
  (select c2 "c7", c4 "c8", COUNT(1) "c9"
   from Table1
   group by c2, c4) Table2
  group by c7 having count(c7)=1) Table3
 where Table2.c7=Table3.c11 and Table2.c9>1)

—————————————————————

 

The query failed with:

Error: 8624, Severity: 16, State: 116

Internal Query Processor Error: The query processor could not produce a query plan. For more information, contact Customer Support Services

 

The error message means that the optimizer couldn’t generate the query plan at all.

 

But why? What’s wrong with the optimizer? The same optimizer is producing plans for the other queries.

First of all, please check whether the stats are updated with a full scan, as the optimizer relies on the stats for generating the best execution plan.

We can update them by using the below command for all the statistics created in the associated tables:

—————————————————————

UPDATE STATISTICS SchemaName.TableName(StatsName) WITH FULLSCAN;

—————————————————————
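If the associated tables have many statistics, a small helper query (a sketch, not from the original post; Table1 is the sample table used above) can generate those UPDATE STATISTICS statements:

-- Generate UPDATE STATISTICS ... WITH FULLSCAN statements
-- for every statistic on a given table
SELECT 'UPDATE STATISTICS '
       + QUOTENAME(SCHEMA_NAME(t.schema_id)) + '.' + QUOTENAME(t.name)
       + ' (' + QUOTENAME(s.name) + ') WITH FULLSCAN;'
FROM sys.stats AS s
JOIN sys.tables AS t ON t.object_id = s.object_id
WHERE t.name = N'Table1';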

 

After updating the stats try to run the query. If it still gives the same error, then the cause of the above error can be:

1.      The SET options are not correctly set for the session.

2.      This query is really complex and the optimizer timed out while generating the plan.

3.      Maybe we didn’t write the query in an optimal way.

4.      Maybe we are missing some indexes or statistics from the database.

 

Now, point no. 3 is something the developers should look into, to make the query simpler and more optimized.

So I will focus on points 1, 2, and 4, because these are the points we can improve as DBAs.

 

If we are getting this error, then we should start by checking the non-default SET options.

We can get this from the TextData column value in Profiler traces (we need to select the ‘Existing Connection’ and ‘Audit Login’ events).

We can also get the values from SSMS (if we are using that as client to execute the query):

Click on ‘Tools’ in SSMS -> ‘Options’ -> expand ‘Query Execution’ -> expand ‘SQL Server’ -> click on ‘Advanced’

 


 

 

The default SET options for an SSMS query window executing a query are:

1.      quoted_identifier

2.      arithabort

3.      ansi_null_dflt_on

4.      ansi_warnings

5.      ansi_padding

6.      ansi_nulls

7.      concat_null_yields_null

 

(Note: to check for the correct SET options and ensure that we have the correct connection options, see:

http://msdn.microsoft.com/en-us/library/ms175088(v=sql.105).aspx)
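As a quick check (a small sketch, not part of the original post), the options in effect for the current session can be inspected directly:

-- Show the SET options in effect for the current session
DBCC USEROPTIONS;

-- Or inspect the option flags recorded for the session in a DMV
SELECT session_id, quoted_identifier, arithabort, ansi_null_dflt_on,
       ansi_warnings, ansi_padding, ansi_nulls, concat_null_yields_null
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;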

 

If all the SET options are set correctly, then we know that the optimizer is not able to produce the query plan because of the query's complexity (join order, where clause, nested tables, etc.). So we need to tell the optimizer not to use its own logic, just follow the join order we specified in the query, and produce the query plan (maybe a bad plan, but just produce it).

We can achieve the above by specifying a clause at the end of the query: OPTION (FORCE ORDER)

 

So, I added that clause at the end of the complex query:

—————————————————————

select * from
(select c2 "c7", c4 "c8", COUNT(1) "c9"
 from Table1
 group by c2, c4) Table2
where (c7) not in
(select c7 from
 (select c2 "c7", c4 "c8", COUNT(1) "c9"
  from Table1
  group by c2, c4) Table2,
 (select c7 "c11", COUNT(c7) "c12" from
  (select c2 "c7", c4 "c8", COUNT(1) "c9"
   from Table1
   group by c2, c4) Table2
  group by c7 having count(c7)=1) Table3
 where Table2.c7=Table3.c11 and Table2.c9>1)
OPTION (FORCE ORDER)

—————————————————————

 

Now I executed the query and it returned results. This means the optimizer produced a query plan (maybe a bad one, but at least it produced it).

This means that we can now run the Database Engine Tuning Advisor (DTA) against this query.

 

The rest is pretty simple. Just execute the query in DTA by right-clicking in the query window and selecting the option “Analyze query in Database Engine Tuning Advisor”.

 


 

It will give recommendations for missing indexes and missing statistics.

Create them and then run the query again after removing the OPTION (FORCE ORDER) clause from the end of the query.

(Warning: Implement the indexes in a test environment first and test thoroughly to check any performance benefit. If you get the desired performance benefit, then only implement in production.)

 

(Note: If you are still not able to execute the query, then please contact Microsoft Support.)

Written by:
Sandipan Pratihar – Support Engineer, SQL Server Support

Reviewed by:
Balmukund Lakhani – Support Escalation Engineer, SQL Server Support

SQL Server 2014 Setup Error: ‘BIDS’ is not a valid value for setting ‘FEATURES’


SQL Server Data Tools (a.k.a. SSDT) used to be part of the product installation media in SQL Server 2012. We can install it by selecting "SQL Server Data Tools" from the Shared Features section.


Alternatively, we can install it from the command prompt using the SQL Server 2012 setup.exe:

setup.exe /QUIET /IACCEPTSQLSERVERLICENSETERMS /ACTION=Install /FEATURES="BIDS" /INDICATEPROGRESS=True

But in SQL Server 2014, the BIDS option is not available to install as a feature; the installation guide documentation describes this: http://msdn.microsoft.com/en-IN/library/ms144259.aspx

So in SQL Server 2014, if we try installing SSDT (SQL Server Data Tools)/BIDS (Business Intelligence Development Studio) via the command prompt using, say, the following command:

setup.exe /QUIET /IACCEPTSQLSERVERLICENSETERMS /ACTION=Install /FEATURES="BIDS" /INDICATEPROGRESS=True


then setup fails with the following error:

Exception type: Microsoft.SqlServer.Chainer.Infrastructure.InputSettingValidationException
Message:
‘BIDS’ is not a valid value for setting ‘FEATURES’. Refer to Help for more information.
HResult : 0x84b40002
FacilityCode : 1204 (4b4)
ErrorCode : 2 (0002)
Data:
SQL.Setup.FailureCategory = InputSettingValidationFailure
DisableWatson = true
Stack:
at Microsoft.SqlServer.Chainer.Infrastructure.InputSettingService.LogAllValidationErrorsAndThrowFirstOne(ValidationState vs)
at Microsoft.SqlServer.Configuration.BootstrapExtension.ValidateChainerSettingAction.ExecuteAction(String actionId)
at Microsoft.SqlServer.Chainer.Infrastructure.Action.Execute(String actionId, TextWriter errorStream)
at Microsoft.SqlServer.Setup.Chainer.Workflow.ActionInvocation.<>c__DisplayClasse.<ExecuteActionWithRetryHelper>b__b()
at Microsoft.SqlServer.Setup.Chainer.Workflow.ActionInvocation.ExecuteActionHelper(ActionWorker workerDelegate)
The following error occurred:
‘BIDS’ is not a valid value for setting ‘FEATURES’. Refer to Help for more information.
Error result: -2068578302
Result facility code: 1204
Result error code: 2
Please review the summary.txt log for further details
———————————————————————-
Error result: -2068578302
Result facility code: 1204
Result error code: 2
SQM Service: Sqm does not have active session.

We can also confirm this by launching the GUI for SQL Server 2014 installation.


The reason for the failure is that the SQL Server 2014 installation media doesn't ship SSDT/BIDS. Creating business intelligence projects requires SSDT, which is available as a separate download at no cost.
Two versions of SSDT are available:

  1. SSDT, which is now available as an add-in to Visual Studio; along with the existing functionality of source control, change tracking, schema compare, database refactoring, and creation of database objects, MSBuild support and Azure database support are also provided.
  2. SSDT-BI, a replacement for BIDS, which provides functionality to develop Integration Services packages, Analysis Services cubes, and Reporting Services reports.

Both of the above tools, SSDT and SSDT-BI, come in two versions, depending on the Visual Studio version:

SSDT: SQL Server tooling in Visual Studio 2013, and SSDT for Visual Studio 2012
SSDT-BI: SSDT-BI for Visual Studio 2012, and SSDT-BI for Visual Studio 2013

Both versions are available for download at no cost, and you can download the suitable version from http://msdn.microsoft.com/en-us/hh297027.aspx

Written by:
Santosh Premkumar Mahajan – SQL Server Support Engineer
Reviewed by:
Pradipta Das, Vikas Rana – Technical Advisor – SQL Server Support Team

Merge Replication: Expired Subscription Clean Up Job / sp_expired_subscription_cleanup / sp_MScleanup_conflict fails with error Msg 1934, Level 16, State 1 AND Msg 20709, Level 16, State 1, Procedure sp_MScleanup_conflict…??


 

Issue:

If you have Merge Replication, you may encounter the below error message thrown by the Expired Subscription Cleanup job (sp_expired_subscription_cleanup), with a mention of the conflict table of an article in the publication:

Msg 1934, Level 16, State 1, Line 1
DELETE failed because the following SET options have incorrect settings: 'ANSI_NULLS, QUOTED_IDENTIFIER'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations.

Msg 20709, Level 16, State 1, Procedure sp_MScleanup_conflict, Line 66
The merge process could not clean up the conflict table "[MSmerge_conflict_<Article_Name>]" for publication "<Publication_Name>".

Troubleshooting Steps:

Before we get into troubleshooting this issue, let's understand what happens when the cleanup job runs: it fires the stored procedure sp_expired_subscription_cleanup, which calls sp_MScleanup_conflict, which fires a DELETE statement against the conflict table.

As per the error message, the DELETE is failing due to incorrect settings for 'ANSI_NULLS, QUOTED_IDENTIFIER'. The error message suggests that we verify that these SET options are correct for use with:

1. indexed views

2. and/or indexes on computed columns

3. and/or filtered indexes

4. and/or query notifications

5. and/or XML data type methods

6. and/or spatial index operations.

We validated that the conflict table doesn't have any of the above-mentioned objects; but while checking the schema of the conflict table, we found that it has 2 computed columns which were persisted. We also checked the SET options for 'ANSI_NULLS, QUOTED_IDENTIFIER' on sp_MScleanup_conflict and found that it had been created with the below:

/****** Object: StoredProcedure [sys].[sp_MScleanup_conflict] Script Date: 11/5/2014 6:31:15 AM ******/
SET ANSI_NULLS OFF
GO
SET QUOTED_IDENTIFIER OFF  -- The SET option in the execution context while creating the stored procedure is set to OFF
GO

So from the above it appears that when we have a persisted computed column and we try to delete from that table while ANSI_NULLS and/or QUOTED_IDENTIFIER is turned OFF, we can get into this issue. To confirm this, we did the below test (keeping replication out of the picture):

-- Create a test database
create database test_persist
go

-- Create a test table
use test_persist
go
CREATE TABLE test (ID INT,
FirstName VARCHAR(100),
LastName VARCHAR(100))
GO

-- Add a computed column, PERSISTED
ALTER TABLE dbo.test ADD
FullName_P AS (FirstName + ' ' + LastName) PERSISTED
GO

-- Insert data into the test table
insert into test(FirstName, LastName) values('Shreya', 'M')
GO

-- Check whether the computed column is persisted
SELECT is_persisted FROM sys.computed_columns
WHERE object_id = OBJECT_ID('test')

Result:

is_persisted
1

-- Perform a delete statement on the table:
USE [test_persist]
GO
SET QUOTED_IDENTIFIER OFF
GO
DELETE FROM [dbo].[test]
GO

-- Ta..da..! Got the below error as expected:

Msg 1934, Level 16, State 1, Line 1

DELETE failed because the following SET options have incorrect settings: 'QUOTED_IDENTIFIER'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations.
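For contrast (this check is not part of the original repro, but follows directly from the documentation quoted later), turning QUOTED_IDENTIFIER back ON lets the same DELETE succeed:

-- With QUOTED_IDENTIFIER ON, the same DELETE succeeds
USE [test_persist]
GO
SET QUOTED_IDENTIFIER ON
GO
DELETE FROM [dbo].[test]
GO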

 

To reproduce this issue for Merge Replication we followed below steps and we were able to reproduce it:

· Created a merge replication with the above table ‘test’ as article.

· To intentionally expire the subscription so that cleanup job cleans up by deleting the data from conflict table:

· Changed the retention period of the publication to 1 minute.

· Disabled the Merge agent.

· After around 5 minutes ran the Expired Subscription Cleanup job and got the following error message in the job history:

 

Date 11/5/2014 5:56:09 AM

Log Job History (Expired subscription clean up)

Step ID 1

Server SHREYA-SERVER\SQL2012

Job Name Expired subscription clean up

Step Name Run agent.

Duration 00:00:01

Sql Severity 16

Sql Message ID 20709

Operator Emailed

Operator Net sent

Operator Paged

Retries Attempted 0

Message

Executed as user: NT AUTHORITY\SYSTEM. DELETE failed because the following SET options have incorrect settings: ‘ANSI_NULLS, QUOTED_IDENTIFIER’. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations. [SQLSTATE 42000] (Error 1934) The merge process could not clean up the conflict table "[MSmerge_conflict_persist_pub_test]" for publication "persist_pub". [SQLSTATE 42000] (Error 20709). The step failed.

 

Unfortunately, neither our documentation (about SET QUOTED_IDENTIFIER) nor the error message is explicit that a DML operation on a persisted computed column can cause the above error.

Below is our documentation:

Concept of SET QUOTED_IDENTIFIER:

SET QUOTED_IDENTIFIER must be ON when you are creating or changing indexes on computed columns or indexed views. If SET QUOTED_IDENTIFIER is OFF, CREATE, UPDATE, INSERT, and DELETE statements on tables with indexes on computed columns or indexed views will fail. For more information about required SET option settings with indexed views and indexes on computed columns, see "Considerations When You Use the SET Statements" in SET Statements (Transact-SQL).

Reference:

http://msdn.microsoft.com/en-us/library/ms174393.aspx

‘Persist’ property of computed column:

Unless otherwise specified, computed columns are virtual columns that are not physically stored in the table. Their values are recalculated every time they are referenced in a query. The Database Engine uses the PERSISTED keyword in the CREATE TABLE and ALTER TABLE statements to physically store computed columns in the table. Their values are updated when any columns that are part of their calculation change. By marking a computed column as PERSISTED, you can create an index on a computed column that is deterministic but not precise. Additionally, if a computed column references a CLR function, the Database Engine cannot verify whether the function is truly deterministic. In this case, the computed column must be PERSISTED so that indexes can be created on it. For more information, see Creating Indexes on Computed Columns.

Reference:

http://technet.microsoft.com/en-us/library/ms191250(v=SQL.105).aspx

Summary:

For CREATE, UPDATE, INSERT, and DELETE statements on tables with either of the above properties, SET QUOTED_IDENTIFIER is supposed to be ON.

In Merge Replication, sp_MScleanup_conflict (where the DELETE is fired against the conflict table) is by design compiled with SET QUOTED_IDENTIFIER OFF. Hence sp_expired_subscription_cleanup (the DELETE statement) errors out with the above error, because of the incompatibility of the QUOTED_IDENTIFIER SET option with the above-mentioned table properties.

Workaround:

If you are getting the same error, check the properties of the table in question; you will find that one of the conditions listed in the error message is true. For the persisted computed column case, a query like the one below can confirm it.
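A small sketch (the conflict table name is taken from the repro above) to list persisted computed columns on a table:

-- List persisted computed columns on the conflict table
SELECT name AS column_name, is_persisted
FROM sys.computed_columns
WHERE object_id = OBJECT_ID(N'dbo.MSmerge_conflict_persist_pub_test')
  AND is_persisted = 1;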

If so, drop the article in question from the publication.

Refer following link for steps: http://technet.microsoft.com/en-us/library/ms152493(v=sql.110).aspx

Alter the table to drop the property you found mentioned in the conditions in the error message.

Ex: To drop the “persist” property for the column from the table:

ALTER TABLE <table name> ALTER COLUMN <computed column> DROP PERSISTED

Add the article back to the Publication:

Please note: Adding an article involves: adding the article to the publication; creating a new snapshot for the publication; synchronizing the subscription to apply the schema and data for the new article.

Refer following link for detailed steps: http://technet.microsoft.com/en-us/library/ms152493(v=sql.110).aspx

 

Written by:
Shreyanka Mathapati – Support Engineer, SQL Server Support

Reviewed by:

Devashish Salgaonkar – Technical Lead, SQL Server Support
Akbar Farishta – Escalation Engineer, SQL Server Support


Script level upgrade for master database failed


 

In this post I would like to explain one of the interesting issues that I encountered while upgrading a SQL Server Instance.

Symptoms

· The SQL Server instance is upgraded using a service pack or any other update.

· After the upgrade, the SQL Server service starts but stops within the next few seconds.

· When we verify the SQL Server errorlog, we see the below errors:

2014-11-19 22:06:47.63 spid7s      Creating sp_ExternalMailQueueListener
2014-11-19 22:06:47.64 spid7s      Creating sp_sysmail_activate
2014-11-19 22:06:47.66 spid7s      Error: 15138, Severity: 16, State: 1.
2014-11-19 22:06:47.66 spid7s      The database principal owns a schema in the database, and cannot be dropped.
2014-11-19 22:06:47.66 spid7s
Error: 912, Severity: 21, State: 2.
2014-11-19 22:06:47.66 spid7s Script level upgrade for database 'master' failed because upgrade step ‘sqlagent100_msdb_upgrade.sql’ encountered error 15138, state 1, severity 16. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the ‘master’ database, it will prevent the entire SQL Server instance from starting. Examine the previous errorlog entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion.
2014-11-19 22:06:47.66 spid7s Error: 3417, Severity: 21, State: 3.
2014-11-19 22:06:47.66 spid7s Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.
2014-11-19 22:06:47.66 spid7s      SQL Trace was stopped due to server shutdown. Trace ID = ‘1’. This is an informational message only; no user action is required.

Note: You may receive the following error message when you connect to the SQL Server 2008 R2 instance in SQL Server Management Studio while SQL Server is performing the database upgrade:

Error: 18401
Login failed for user ‘<login name>‘. Reason: Server is in script upgrade mode. Only administrator can connect at this time.

 

Cause

· SQL Server is trying to perform the database upgrade by executing 'sqlagent100_msdb_upgrade.sql'.

· This script can be found at "C:\Program Files\Microsoft SQL Server\MSSQLxx.yyyy\MSSQL\Install"

 

Where xx stands for the SQL version:

SQL Server 2008 -> 10

SQL Server 2008 R2 -> 10_50

And yyyy stands for the Instance_ID.

· The script tries to drop and recreate a few roles and objects, and it fails while dropping the role 'DatabaseMailUserRole'.

   Script snippet:

   --------------------------------------------------------------
   -- Database Mail roles and permissions
   --------------------------------------------------------------
   -- Create the DatabaseMailUserRole role
   IF (EXISTS (SELECT *
               FROM msdb.dbo.sysusers
               WHERE (name = N'DatabaseMailUserRole')
                 AND (issqlrole = 1)))
   BEGIN
     -- If there are no members in the role, then drop and re-create it
     IF ((SELECT COUNT(*)
          FROM msdb.dbo.sysusers   su,
               msdb.dbo.sysmembers sm
          WHERE (su.uid = sm.groupuid)
            AND (su.name = N'DatabaseMailUserRole')
            AND (su.issqlrole = 1)) = 0)
     BEGIN
       EXECUTE msdb.dbo.sp_droprole @rolename = N'DatabaseMailUserRole'   -- ****************>> Point of failure
       EXECUTE msdb.dbo.sp_addrole @rolename = N'DatabaseMailUserRole'
     END
   END
   ELSE
     EXECUTE msdb.dbo.sp_addrole @rolename = N'DatabaseMailUserRole'

The sp_droprole call fails because the role "DatabaseMailUserRole" owns a custom (user-created) schema.

 

Resolution

  • Started SQL Server with trace flag -T902.

Trace flag 902 skips the script upgrade process, which allows users to log in.

cmd

C:\Windows\system32>net start mssql$SQL2008R2 /T902

The SQL Server (SQL2008R2) service is starting..

The SQL Server (SQL2008R2) service was started successfully.

  • Check the consistency of the MSDB database and take a full database backup.

DBCC CHECKDB (MSDB)
GO

BACKUP DATABASE [msdb] TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQL2008R2\MSSQL\Backup\msdb.bak'
WITH NOFORMAT, NOINIT, NAME = N'msdb-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 1

GO

  • Verify whether the role 'DatabaseMailUserRole' owns any user schemas.

-- To find the list of schemas owned by the database role 'DatabaseMailUserRole'

select sch.name as [Schema-Name], dbpri.name as [Schema-Owner]
from sys.schemas as sch
inner join sys.database_principals dbpri on dbpri.principal_id = sch.principal_id
where sch.name like 'DatabaseMailUserRole'
or dbpri.name like 'DatabaseMailUserRole'

Sample output:

————————————
Schema-Name           Schema-Owner
————————————

DatabaseMailUserRole  DatabaseMailUserRole
A_Custom_Schema       DatabaseMailUserRole

(2 row(s) affected)

  • Create a test role and make it the owner of the custom schema which was owned by "DatabaseMailUserRole".

USE [msdb]
GO

CREATE ROLE [A_Test_Role]
GO

ALTER AUTHORIZATION ON SCHEMA::[A_Custom_Schema] TO [A_Test_Role]
GO

  • After the above modification, start the SQL Server service normally, without any additional trace flags. This time SQL Server completes the database upgrade successfully.
  • Revert the change we made, if the application requires it:

ALTER AUTHORIZATION ON SCHEMA::[A_Custom_Schema] TO [DatabaseMailUserRole]
GO
DROP ROLE [A_Test_Role]
GO


Written by:

Raghavendra Srinivasan, Support Engineer, SQL Server Support

Reviewed by:

Vijay Rodrigues, Support Escalation Engineer, SQL Server Support


Capture successful logins: You may see many entries (high volume) when you create an Audit for the SUCCESSFUL_LOGIN_GROUP action group


 

When you are auditing successful login attempts using the SUCCESSFUL_LOGIN_GROUP action group through SQL Audit, you will see that there are many entries in the audit file.

You may see this behavior for successful logins when connections are made through SQL Server Management Studio. This occurs because for every successful login through SSMS you will have other corresponding connections made in the background. For example:

•   Object Explorer
•   Query Editor Window
•   Solution Explorer etc.

Hence successful logins for every such connection will also be recorded.

This holds true whether you capture successful login attempts using:

- Extended Events
- SQL error log
- Profiler
- Audits

Whereas you will observe only one entry recorded when connecting from sqlcmd.

Workaround:

Unfortunately, SQL Audit does not have a filtering option to remove connections coming from SSMS (Object Explorer etc.). To achieve this you can use Extended Events with a filter on the ‘client_app_name’ field, with Operator ‘Not like’ and Value ‘Microsoft SQL Server Management Studio’.

Program Name – Microsoft SQL Server Management Studio — (for the internal connections like Object Explorer etc.)
Program Name – Microsoft SQL Server Management Studio – Transact-SQL IntelliSense — (for IntelliSense)
Program Name – Microsoft SQL Server Management Studio – Query — (For queries run via SSMS)

The program name can be identified either by using SQL Profiler or by using a query like the one below.
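(The original post showed the query and its output as screenshots. A query along these lines, using sys.dm_exec_sessions, returns the program name for current sessions.)

-- List current user sessions with their program (application) names
SELECT session_id, login_name, host_name, program_name
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;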

 


Using Extended Events:

Information on how to use Extended Events:

http://msdn.microsoft.com/en-us/library/bb630282(v=sql.110).aspx

http://blogs.msdn.com/b/microsoft_press/archive/2012/03/21/from-the-mvps-a-gui-for-extended-events-in-sql-server-2012.aspx
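As a rough sketch (the session name and file path are illustrative, not from the original post), an Extended Events session along the lines described above might look like this:

-- Capture logins, excluding sessions whose application name starts with
-- "Microsoft SQL Server Management Studio" (covers Object Explorer,
-- IntelliSense, and query connections)
CREATE EVENT SESSION [Audit_Successful_Logins] ON SERVER
ADD EVENT sqlserver.login(
    ACTION (sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.username)
    WHERE (NOT [sqlserver].[like_i_sql_unicode_string]([sqlserver].[client_app_name],
               N'Microsoft SQL Server Management Studio%')))
ADD TARGET package0.event_file(SET filename = N'C:\Temp\Audit_Successful_Logins.xel');
GO
ALTER EVENT SESSION [Audit_Successful_Logins] ON SERVER STATE = START;
GO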

 

Written by:
Shreyanka Mathapati – Support Engineer, SQL Server Support

Reviewed by:

Devashish Salgaonkar – Technical Lead, SQL Server Support
Akbar Farishta – Escalation Engineer, SQL Server Support

Common issues faced while configuring AlwaysOn listener on Azure VM cluster


We have the MSDN article Tutorial: Listener Configuration for AlwaysOn Availability Groups (http://msdn.microsoft.com/en-us/library/dn425027.aspx), which provides detailed steps to configure an AlwaysOn listener on an Azure VM cluster.
This article provides steps that can be followed to resolve commonly faced issues.

Important consideration: All SQL Server virtual machines on Azure should be in the SAME cloud service for an AlwaysOn configuration with a listener. This can be verified from the Azure portal. For example, in the below screenshot, Node1 and Node2 are in the same cloud service, as required. This requirement applies when all virtual machines are in the same Azure region.

[Screenshot: Azure portal showing Node1 and Node2 in the same cloud service]

Issue-1

Executing the below commands might complete without displaying any progress or errors at the end of execution, yet the endpoints are not created. We might also see the below warnings in some cases.

PS C:\>Get-AzurePublishSettingsFile
PS C:\>Import-AzurePublishSettingsFile -PublishSettingsFile C:\temp\Newcredentials.publishsettings

PS C:\># Define variables
PS C:\>$AGNodes = "SQLVM2","SQLVM3" # all availability group nodes should be included, separated by commas
PS C:\>$ServiceName = "testservices" # the name of the cloud service that contains the availability group nodes
PS C:\>$EndpointName = "HDEAL25_EP" # name of the endpoint
PS C:\>$EndpointPort = "14333" # public port to use for the endpoint
PS C:\>
PS C:\># Configure a load balanced endpoint for each node in $AGNodes, with direct server return enabled
PS C:\>ForEach ($node in $AGNodes)
PS C:\>{
PS C:\>Get-AzureVM -ServiceName $ServiceName -Name $node | Add-AzureEndpoint -Name $EndpointName -Protocol "TCP" -PublicPort $EndpointPort -LocalPort $EndpointPort -LBSetName "$EndpointName-LB" -ProbePort 59999 -ProbeProtocol "TCP" -DirectServerReturn $true | Update-AzureVM
PS C:\>}

Error /Warning:

WARNING: No deployment found in service: 'testservices'.
WARNING: No deployment found in service: 'testservices'.

Possible causes:

•   This can occur if the Windows account used for downloading the "publishsettings" file has multiple subscriptions.

•   The command then cannot find the virtual machines in the default subscription for the Windows account, and the configuration fails.

How to find whether the account has multiple subscriptions:

Get-AzureSubscription | Select SubscriptionName,IsDefault   # Get the list of subscriptions for the account
Get-AzureSubscription -Current | Select SubscriptionName    # Find the currently active subscription for this connection
Get-AzureSubscription -Default | Select SubscriptionName    # Find the default subscription for this connection
Get-AzureVM # Get the list of virtual machines and their services


Resolution

•   Select the required subscription and verify the Virtual Machines under that subscription. Finally continue with the required deployment/configurations.

Select-AzureSubscription "Subscription_Name_Here"


•   We can also change the default subscription using the below command.

Select-AzureSubscription "Subscription_Name_Here" -Default


 

Issue-2

•   In one scenario we followed the steps (8th point of the 1st step) mentioned in the article and executed the below commands to create the load-balanced VM endpoints.

PS C:\> Get-AzurePublishSettingsFile
PS C:\> Import-AzurePublishSettingsFile -PublishSettingsFile C:\temp\credentials.publishsettings
VERBOSE: Setting: azure-Prod_admin as the default and current subscription. To view other subscriptions use Get-AzureSubscription

1)  # Define variables
2)  $AGNodes = "Node1","Node2" # all availability group nodes should be included, separated by commas
3)  $ServiceName = "SQLProd.cloudapp.net" # the name of the cloud service that contains the availability group nodes
4)  $EndpointName = "MySQLEndpoint" # name of the endpoint
5)  $EndpointPort = "59636" # public port to use for the endpoint
6)
7)  # Configure a load balanced endpoint for each node in $AGNodes, with direct server return enabled
8)  ForEach ($node in $AGNodes)
9)  {
10) Get-AzureVM -ServiceName $ServiceName -Name $node | Add-AzureEndpoint -Name $EndpointName -Protocol "TCP" -PublicPort $EndpointPort -LocalPort $EndpointPort -LBSetName "$EndpointName-LB" -ProbePort 59999 -ProbeProtocol "TCP" -DirectServerReturn $true | Update-AzureVM
11) }

Get-AzureVM : BadRequest: The hosted service name is invalid.
At line:3 char:5
+     Get-AzureVM -ServiceName $ServiceName -Name $node | Add-AzureEndpoint -Name  …
+     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : CloseError: (:) [Get-AzureVM], CloudException
    + FullyQualifiedErrorId : Microsoft.WindowsAzure.Commands.ServiceManagement.IaaS.GetAzureVMCommand
Get-AzureVM : BadRequest: The hosted service name is invalid.
At line:3 char:5
+     Get-AzureVM -ServiceName $ServiceName -Name $node | Add-AzureEndpoint -Name  …
+     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : CloseError: (:) [Get-AzureVM], CloudException
    + FullyQualifiedErrorId : Microsoft.WindowsAzure.Commands.ServiceManagement.IaaS.GetAzureVMCommand

PS C:\>

Note: Line numbers have been added for explanation and clarity.

•   The error output also provides the line number, which shows the point of failure: the variable $ServiceName.

Error:

Get-AzureVM : BadRequest: The hosted service name is invalid.
At line:3 char:5
+     Get-AzureVM -ServiceName $ServiceName -Name $node | Add-AzureEndpoint -Name  …
+     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : CloseError: (:) [Get-AzureVM], CloudException
    + FullyQualifiedErrorId : Microsoft.WindowsAzure.Commands.ServiceManagement.IaaS.GetAzureVMCommand

 

•   We noticed that the value provided for the cloud service was in the format of the FQDN (fully qualified domain name) of the service name/DNS name, whereas the command expects only the name of the service, which is "SQLProd" in our case.

•   The command executed successfully once we modified the value provided for the variable $ServiceName.

Resolution: Amended command which executed successfully:

PS C:\> Get-AzurePublishSettingsFile

PS C:\> Import-AzurePublishSettingsFile -PublishSettingsFile C:\temp\credentials.publishsettings
VERBOSE: Setting: azure-Prod_admin as the default and current subscription. To view other subscriptions use Get-AzureSubscription
# Define variables
$AGNodes = "Node1","Node2" # all availability group nodes should be included, separated by commas
$ServiceName = "SQLProd" # the name of the cloud service that contains the availability group nodes
$EndpointName = "MySQLEndpoint" # name of the endpoint
$EndpointPort = "59636" # public port to use for the endpoint

# Configure a load balanced endpoint for each node in $AGNodes, with direct server return enabled
ForEach ($node in $AGNodes)
{
Get-AzureVM -ServiceName $ServiceName -Name $node | Add-AzureEndpoint -Name $EndpointName -Protocol "TCP" -PublicPort $EndpointPort -LocalPort $EndpointPort -LBSetName "$EndpointName-LB" -ProbePort 59999 -ProbeProtocol "TCP" -DirectServerReturn $true | Update-AzureVM
}

 


Summary: To conclude, we should specify ONLY the name of the cloud service to which the node belongs, not the FQDN.

 

Issue-3

•   We usually tend to make a mistake at point #8 of "Step 4: Create the availability group listener", and we get the below error:

PS C:\> # Define variables
PS C:\> $ClusterNetworkName = "WinAOCluster" # the cluster network name
PS C:\> $IPResourceName = "AGListenerProd_10.0.0.55" # the IP Address resource name
PS C:\> $CloudServiceIP = "135.100.100.100" # IP address of your cloud service
PS C:\>
PS C:\> Import-Module FailoverClusters
PS C:\> Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$CloudServiceIP";"ProbePort"="59999";"SubnetMask"="255.255.255.255";"Network"="$ClusterNetworkName";"OverrideAddressMatch"=1;"EnableDhcp"=0}

Error:
Set-ClusterParameter : Unable to save property changes for 'AGListenerProd_10.0.0.55'.
    The cluster network was not found

At line:1 char:39
+ Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{“Address” …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Set-ClusterParameter], ClusterCmdletException
    + FullyQualifiedErrorId : Set-ClusterParameter,Microsoft.FailoverClusters.PowerShell.SetClusterParameterCommand
PS C:\>
PS C:\>

Possible Cause:

•   The error says it failed for the value supplied for "$IPResourceName", but the mistake is in the previous line, "$ClusterNetworkName".
•   The description for this parameter is "the cluster network name". This is NOT the name of the Windows cluster.
•   We need to provide the name of the network used in the Windows cluster, e.g. "Cluster Network 1".

How to find the "Network Name" for this configuration:

1.  Connect to Failover Cluster Manager and expand the Networks section; we will see the network used for the current cluster subnet.

[Screenshot: Failover Cluster Manager, Networks section showing the cluster network]

 

2.  Use the PowerShell command:

PS C:\> Get-ClusterNetwork

Name                 State
----                 -----
Cluster Network 1    Up

Resolution: Amended command which executed successfully:

PS C:\> # Define variables
PS C:\> $ClusterNetworkName = "Cluster Network 1" # the cluster network name
PS C:\> $IPResourceName = "AGListenerProd_10.0.0.55" # the IP Address resource name
PS C:\> $CloudServiceIP = "23.100.95.15" # IP address of your cloud service
PS C:\> Import-Module FailoverClusters
PS C:\> Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$CloudServiceIP";"ProbePort"="59999";"SubnetMask"="255.255.255.255";"Network"="$ClusterNetworkName";"OverrideAddressMatch"=1;"EnableDhcp"=0}
WARNING: The properties were stored, but not all changes will take effect until AGListenerProd_10.0.0.55 is taken offline and then online again.
PS C:\>

 


 

Written by:
Raghavendra Srinivasan – Support Engineer, SQL Server Support

Reviewed by:

Vijay Rodrigues – Escalation Engineer, SQL Server Support

Snapshot agent fails with Error: 241, Severity: 16, State: 1 (Conversion failed when converting date and/or time from character string)


ISSUE

Let’s say you are monitoring Replication Agent status and notice that the Snapshot Agent has failed with the following error while generating a snapshot, particularly during the creation of BCP files.

Error: 241, Severity: 16, State: 1

Conversion failed when converting date and/or time from character string.

To further investigate the above error and to understand which query is causing it, run a SQL Profiler trace in the background.

CAUSE

In order to understand the cause of the issue, it’s important to look at the schema structure and the column on which indexing is done. So let’s take a look at a schema script for a sample table “product” which has been published for Transactional Replication.

CREATE TABLE [dbo].[product] (
                [ItemID] [int] IDENTITY(1, 1) NOT FOR REPLICATION NOT NULL
                ,[ReportDate] [date] NOT NULL
                ,[ProviderID] [int] NOT NULL
                ,[InvoiceImmediately] [bit] NOT NULL
                ,CONSTRAINT [PK_ItemUnbilled] PRIMARY KEY NONCLUSTERED ([ItemID] ASC)
                )

CREATE CLUSTERED INDEX [IX_ItemUnbilled_ReportDate] ON [dbo].[product] (
                [ReportDate] ASC
                ,[InvoiceImmediately] ASC
                ,[ProviderID] ASC
                )

So the primary key (which is a nonclustered index) is on the [ItemID] column, and the clustered composite index is on three columns, with [ReportDate] as the leading column. Before the error is raised, you will see something like the following query getting executed:

Select * from [dbo].[syncobj_0x4434414538323435] where ([ReportDate] is null) or ([ReportDate] <= N'31/01/2014 00:00:00') order by [ReportDate] ASC,[InvoiceImmediately] ASC,[ProviderID] ASC

The WHERE clause we see here in the SELECT statement is because of BCP partitioning, and the BCP file partitioning was done based on the [ReportDate] column (the leading column in the clustered index key). The [ReportDate] column is of data type date. The conversion fails because of the difference between the default language of the account used to run the Snapshot Agent, which was "UK English" (dd/mm/yyyy) in this case, and the default language of SQL Server, which is "US English" (mm/dd/yyyy).

WORKAROUNDS

Now there are multiple easy workarounds available for the problem in question. I have listed the workarounds below.

A) Use a different account, whose default language setting is "US English", to run the Snapshot Agent. Alternatively, if possible, change the default language setting of the existing account to "US English" (see the sketch below).

Note: This is the most preferred method.
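A minimal sketch for workaround A (the login name here is hypothetical):

-- Check the current default language of the Snapshot Agent's login
-- (the login name is hypothetical)
SELECT name, default_language_name
FROM sys.server_principals
WHERE name = N'CONTOSO\SnapshotAgentAccount';

-- Change the default language to us_english
ALTER LOGIN [CONTOSO\SnapshotAgentAccount] WITH DEFAULT_LANGUAGE = us_english;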

B) You can disable BCP partitioning only for the problematic table by using static row filters. We do not go for BCP partitioning for tables that are filtered. Thus you can have a static row filter on the table in question but still retrieve all the rows without triggering BCP partitioning, using a query like "Select * from table where 1=1".

Note: BCP partitioning will be disabled only for the table in question not the entire publication.

C) If the @sync_method property of the publication is set to "native", it is recommended to change it to the default option of "concurrent". This will be helpful if the Snapshot Agent is picking up a date-type column for BCP partitioning just because it has a clustered index on it, while the actual primary key column in the table is nonclustered. In the default setting, which is "concurrent", BCP partitioning happens only if the primary key column has a clustered index.

Exec sp_addpublication ... @sync_method = N'concurrent' ...

Note: If the primary key column does not have clustered index and the @sync_method is set to “concurrent” then the BCP partitioning will be disabled for the entire publication. You might find snapshot agent taking more time to generate snapshot if there is any other big table in the same publication.

D) You can disable BCP partitioning for the entire publication by using -EnableArticleBCPPartitioning 0 in the Snapshot Agent job step properties. Please refer to Paul Ibison's blog for more details: http://www.replicationanswers.com/BCPPartitioning.asp.

Disclaimer: Links to third-party web sites are provided for information purposes only. Microsoft does not endorse nor support the content in third-party links. Microsoft is not responsible for the content of a third-party website. Privacy and security policies may differ from those practiced by Microsoft.

Note: BCP partitioning will be disabled for the entire publication; you might find snapshot agent taking more time to generate snapshots if there is any other big table in the same publication.

E) The last option is to ensure that the primary key column has a clustered index and is not of data type Date.

Note: This needs schema level changes.

While I strongly suggest using the first workaround because it’s the recommended approach, I am eager to hear about the workaround you choose for your environment and the reasons for your choice. So please state the details in the comments section.

Author:

Madhumita Tripathy , Support Engineer, Microsoft India GTSC

Reviewed by:

Rishi Maini, Escalation Engineer, Microsoft India GTSC

Vikas Rana, Technical Advisor, Microsoft India GTSC

How to find which user deleted the user database in SQL Server


 

In one of our recent scenarios we noticed that a user database had been deleted, and the customer wanted to know which user had dropped the database. We knew that multiple users had full access to that database.
In this post I'll be sharing the steps to find the details of the user who dropped the database.

Method-1

  • Connect to the SQL Server instance using Management Studio.
  • Right-click on the instance and select

"Reports" -> "Standard Reports" -> "Schema Changes History"

  • We get a report of schema changes for all databases, from which we can get the user account that was used to delete/drop the database.


Sample output:

[Screenshot: Schema Changes History report output]

Note: This report doesn’t contain the details of the application or the server from which the DROP statement was executed.

Method – 2:

  • Get the location of the SQL Server errorlog using one of the below commands:

sp_readerrorlog 0,1,'Logging SQL Server messages in file'

go

Sample output:

LogDate                 ProcessInfo  Text

———————– ———— —————————————————————————————————————

2015-01-09 15:31:31.330 Server       Logging SQL Server messages in file ‘C:\Program Files\Microsoft SQL Server\MSSQL12.SQL14\MSSQL\Log\ERRORLOG’.

-- or

select SERVERPROPERTY('errorlogfilename')

go

Sample output:

———————————————————————————————————–

C:\Program Files\Microsoft SQL Server\MSSQL12.SQL14\MSSQL\Log\ERRORLOG

  • In the same Log folder, look for the default SQL trace files and open the trace file that covers the time of the issue, "log_111.trc" for example.


  • Make a copy of the file in the same or a different location, "log_111 - Copy.trc".
  • We can now manually open this file in SQL Profiler and search for the keyword "Object:Deleted", or load it into a SQL table and use a T-SQL query to get the details. Here I'm providing the steps for T-SQL; a quick way to locate the default trace path is shown below.
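(A one-liner, using the same sys.traces catalog view that the Method 3 script further below relies on, to locate the current default trace file:)

-- Locate the current default trace file
SELECT path FROM sys.traces WHERE is_default = 1;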

-- To load the trace into a SQL table

use tempdb
go

SELECT * INTO trace_table FROM ::fn_trace_gettable('C:\Program Files\Microsoft SQL Server\MSSQL12.SQL14\MSSQL\Log\log_111 - Copy.trc', default)
go

-- Get the details of the deleted database
-- Change the value of the database name to the one which was deleted

select DatabaseID, DatabaseName, LoginName, HostName, ApplicationName, StartTime from tempdb.dbo.trace_table
where DatabaseName = 'somedb' and eventclass = 47 -- 47 is the Object:Deleted event class

 

Sample output:

DatabaseID  DatabaseName  LoginName  HostName  ApplicationName                                 StartTime
----------  ------------  ---------  --------  ----------------------------------------------  -----------------------
26          SomeDB        SQL_User1  Client1   Microsoft SQL Server Management Studio - Query  2015-01-14 12:43:46.630
26          SomeDB        SQL_User1  Client1   Microsoft SQL Server Management Studio - Query  2015-01-14 12:43:46.630

(2 row(s) affected)

From the above we can clearly say that the user "SQL_User1" deleted the database from machine "Client1" using SSMS at the above-mentioned time.

 

Method – 3

Use the below script to get the details of deleted/dropped databases. We can explicitly specify the name of the database which was deleted or get the output for all databases.

use tempdb
go

declare @enable int
select top 1 @enable = convert(int,value_in_use) from sys.configurations where name = 'default trace enabled'
if @enable = 1 -- default trace is enabled
begin

declare @d1 datetime; declare @diff int; declare @indx int;
declare @curr_tracefilename varchar(500);
declare @base_tracefilename varchar(500);
declare @temp_trace table (obj_name nvarchar(256) collate database_default, database_name nvarchar(256) collate database_default, start_time datetime,
              event_class int, event_subclass int, object_type int, server_name nvarchar(256) collate database_default, login_name nvarchar(256) collate database_default,
              application_name nvarchar(256) collate database_default, ddl_operation nvarchar(40) collate database_default);

select @curr_tracefilename = path from sys.traces where is_default = 1; set @curr_tracefilename = reverse(@curr_tracefilename)
select @indx = PATINDEX('%\%', @curr_tracefilename); set @curr_tracefilename = reverse(@curr_tracefilename)
set @base_tracefilename = LEFT(@curr_tracefilename, len(@curr_tracefilename) - @indx) + '\log.trc';

insert into @temp_trace select ObjectName,DatabaseName,StartTime,EventClass,EventSubClass,ObjectType,ServerName,LoginName,ApplicationName,'temp'
from ::fn_trace_gettable( @base_tracefilename, default ) where EventClass in (46,47,164) and EventSubclass = 0 and DatabaseID <> 2
----------------------------------------------------------------------------------------------------------------------------------
and DatabaseName = 'SomeDB' -- <<<====== Specify the name of the database here, or comment this line out to get details for all databases
----------------------------------------------------------------------------------------------------------------------------------

update @temp_trace set ddl_operation = 'CREATE' where event_class = 46
update @temp_trace set ddl_operation = 'DROP' where event_class = 47
update @temp_trace set ddl_operation = 'ALTER' where event_class = 164

select @d1 = min(start_time) from @temp_trace
set @diff = datediff(hh,@d1,getdate())
set @diff = @diff/24;

select start_time as Event_Time, Database_name, Server_name, Login_name, Application_name,
-- SQLInstance,
              DDL_Operation from @temp_trace where object_type not in (21587)
order by start_time desc

end


Note: The above query is a modified version of the query that is executed in the background when we use the SQL Server standard report (Method 1).

 

Author:

Raghavendra Srinivasan , Support Engineer, Microsoft India GTSC

Reviewed by:

Balmukund Lakhani, Support Escalation Engineer, Microsoft India GTSC



Error configuring distribution- "Publishing and distribution are supported only in SQL Server version 7.0 or later"


 

I recently installed an instance of SQL Server 2016 on one of my servers. Later, when I tried to configure the new instance as a Distributor, I faced the below error:

Error:


——————————
TITLE: Configure Distribution Wizard
——————————
'Prod\SQL16' cannot be configured for publishing and distribution. Publishing and distribution are supported only in SQL Server version 7.0 or later.
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&ProdVer=12.0.2000.8&EvtSrc=Microsoft.SqlServer.Management.UI.ConfigureWizardErrorSR&EvtID=ServerTooOld&LinkId=20476
——————————
BUTTONS:
OK
——————————

 

Observations:

  • From Object Explorer I can clearly see that I'm connected to the latest version of SQL Server [SQL 2016], but I'm still getting the above error.

 


 

  • I tried to configure the Distributor using a T-SQL script, and it executed successfully (a sketch is shown below). After this, when I tried to create a new publication, I again got the same error. So the issue seems to be only with the GUI.
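A minimal sketch of such a script, based on the sp_adddistributor and sp_adddistributiondb procedures linked under Related Articles below (the password is illustrative):

-- Configure the local instance as its own Distributor
-- (password is illustrative)
use master
exec sp_adddistributor @distributor = N'Prod\SQL16', @password = N'<StrongPassword>'
go
exec sp_adddistributiondb @database = N'distribution', @security_mode = 1
go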

 


  • I then verified the version of SQL Server Management Studio I was using, and noticed that it was an older version: SQL 2014 SSMS.


 

 

Solution:

  • I then launched SSMS for SQL Server 2016 and was able to configure the instance as a Distributor. As mentioned before, we can also use the T-SQL commands to configure it.


 

 

  • I faced this issue because I had multiple versions of SQL Server Management Studio on the same server.


Related Articles:

· Configure Distribution https://msdn.microsoft.com/en-us/library/ms151860.aspx

· How to: Configure Publishing and Distribution (Replication Transact-SQL Programming) https://technet.microsoft.com/en-us/library/ms147363(v=sql.105).aspx

· How to: Configure Publishing and Distribution (SQL Server Management Studio) https://technet.microsoft.com/en-us/library/ms151192(v=sql.105).aspx

· sp_adddistributor (Transact-SQL) https://technet.microsoft.com/en-us/library/ms176028(v=sql.105).aspx

· sp_adddistributiondb (Transact-SQL) https://technet.microsoft.com/en-us/library/ms189755(v=sql.105).aspx

 

Author:

Raghavendra Srinivasan, Support Engineer, Microsoft India GTSC

Reviewed by:

Balmukund Lakhani, Support Escalation Engineer, Microsoft India GTSC

Setting up Service Broker where the initiator database is part of the AG


 

There have been several posts on setting up Service Broker where the database is part of an Availability Group. This is a one-stop blog for setting up Service Broker where the initiator database is part of the AG.

What happens when the AG fails over? Should there be any additional steps when setting up Service Broker so that it resumes operation after an AG failover? This blog post answers these questions, with details wherever required. The below TechNet article outlines the steps: https://msdn.microsoft.com/en-IN/library/hh710058.aspx

In this blog, we will discuss the steps along with screenshots, wherever applicable.

Requirements:

  1. Service Broker has to be set up and enabled on the database before the database is added to the AG (see the sketch after this list).
  2. The Availability Group must have a listener.
  3. Service Broker endpoints must be configured on every instance of SQL Server that hosts an availability replica for the availability group.
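A minimal sketch for requirement 1, assuming the initiator database name InitiatorDB used later in this blog:

-- Run on the primary before the database joins the AG
ALTER DATABASE InitiatorDB SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;
-- Verify the setting
SELECT name, is_broker_enabled FROM sys.databases WHERE name = N'InitiatorDB';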
Environment:

AlwaysOn
    Name of Availability Group: AGService
    AG Listener: LIST
    VM1\INS1 : Primary Synchronous Replica
    VM2\INS2 : Secondary Synchronous Replica (Automatic Failover)

Service Broker
    VM1\INS1 : Initiator 1
    VM2\INS2 : Initiator 2
    VM3\INS3 : Target

At the end of this blog, we’ll be setting up the below environment

[diagram: AG (VM1\INS1 primary, VM2\INS2 secondary, listener LIST) exchanging Service Broker messages with target VM3\INS3]

Overview:

•   Set up Service Broker on the initiator: VM1\INS1 and the target: VM3\INS3
•   Take a backup of the initiator database on VM1\INS1 and restore it with NORECOVERY on the 2nd initiator (secondary replica): VM2\INS2
•   Create a Service Broker endpoint on the secondary replica: VM2\INS2
•   Create a route on the secondary replica: VM2\INS2
•   Back up the Service Master Key from the primary replica and restore it on the secondary replica
•   Add the initiator database to the AG, with VM1\INS1 as the primary and VM2\INS2 as the secondary replica

Steps:
  1. Set up Service Broker on the initiator: VM1\INS1 and the target: VM3\INS3
    Here, we'll be setting up Service Broker as we normally do, with VM1\INS1 as the initiator and VM3\INS3 as the target.
    The T-SQL to do this is included, just so there is more clarity on the naming convention used in this blog. These scripts have been taken from the below MSDN article:
    https://technet.microsoft.com/en-us/library/bb839483(v=sql.105).aspx

------------------------------------------
        --Lesson 1: Creating the Target Database--
        --***[connect to the TARGET instance: VM3\INS3]***--
------------------------------------------

        --Create a Service Broker endpoint
        USE master;
        GO
        IF EXISTS (SELECT * FROM master.sys.endpoints WHERE name = N'InstTargetEndpoint')
             DROP ENDPOINT InstTargetEndpoint;
        GO

        CREATE ENDPOINT InstTargetEndpoint
        STATE = STARTED
        AS TCP ( LISTENER_PORT = 4022, LISTENER_IP=ALL )
        FOR SERVICE_BROKER (AUTHENTICATION = WINDOWS );
        GO

        --Create the target database, master key, and user
        USE master;
        GO
        IF EXISTS (SELECT * FROM sys.databases WHERE name = N'TargetDB')
             DROP DATABASE TargetDB;
        GO

        CREATE DATABASE TargetDB;
        GO
        USE TargetDB;
        GO
        CREATE MASTER KEY
               ENCRYPTION BY PASSWORD = N'<EnterStrongPassword1Here>';
        GO
        CREATE USER TargetUser WITHOUT LOGIN;
        GO

        --Create the target certificate & back up the certificate

        CREATE CERTIFICATE InstTargetCertificate
             AUTHORIZATION TargetUser
             WITH SUBJECT = 'Target Certificate',
                  EXPIRY_DATE = N'12/31/3010';

        BACKUP CERTIFICATE InstTargetCertificate
          TO FILE = N'\\DC\Certshare\InstTargetCertificate_INS3.cer';
        GO

        --Create message types for request and reply
        USE TargetDB;
        GO
        Create message type [//RequestMsg]
        go
        Create message type [//ReplyMsg]
        go

        --Create a contract for the above created message types
        USE TargetDB;
        GO
        Create contract [//SampleContract]
        (
        [//RequestMsg] sent by initiator,
        [//ReplyMsg] sent by target
        )

        --Create queue & service for the target
        USE TargetDB;
        GO
        CREATE QUEUE TargetQueue_TargetDB;
        go
        CREATE SERVICE [//TargetService]
        AUTHORIZATION TargetUser
        ON QUEUE TargetQueue_TargetDB([//SampleContract]);
        GO

The below queries need to be run on the initiator. At this point, we'll run them only on VM1\INS1, which will be the primary replica in the AG.

        ---------------------------------------------
        --Lesson 2: Creating the Initiator Database--
        --***[connect to the INITIATOR instance: VM1\INS1]***--
        ---------------------------------------------

        --Create a Service Broker endpoint
        USE master;
        GO
        IF EXISTS (SELECT * FROM sys.endpoints
                   WHERE name = N'InstInitiatorEndpoint')
             DROP ENDPOINT InstInitiatorEndpoint;
        GO
        CREATE ENDPOINT InstInitiatorEndpoint
        STATE = STARTED
        AS TCP ( LISTENER_PORT = 4022, LISTENER_IP=ALL )
        FOR SERVICE_BROKER (AUTHENTICATION = WINDOWS );
        GO

        --Create the initiator database, master key, and user
        USE master;
        GO
        IF EXISTS (SELECT * FROM sys.databases
                   WHERE name = N'InitiatorDB')
             DROP DATABASE InitiatorDB;
        GO
        CREATE DATABASE InitiatorDB;
        GO
        USE InitiatorDB;
        GO

        CREATE MASTER KEY
               ENCRYPTION BY PASSWORD = N'<EnterStrongPassword2Here>';
        GO
        CREATE USER InitiatorUser WITHOUT LOGIN;
        GO

        --Create the initiator certificate

        CREATE CERTIFICATE InstInitiatorCertificate
             AUTHORIZATION InitiatorUser
             WITH SUBJECT = N'Initiator Certificate',
                  EXPIRY_DATE = N'12/31/3010';

        BACKUP CERTIFICATE InstInitiatorCertificate
          TO FILE =
        N'\\dc\certshare\InstInitiatorCertificate_INS1.cer';
        GO

        --Create message types for request and reply
        USE InitiatorDB;
        GO
        Create message type [//RequestMsg]
        go
        Create message type [//ReplyMsg]
        go

        --Create a contract for the above created message types
        USE InitiatorDB;
        GO
        Create contract [//SampleContract]
        (
        [//RequestMsg] sent by initiator,
        [//ReplyMsg] sent by target
        )

        --Create the initiator queue and service

        CREATE QUEUE InitiatorQueue_InitiatorDB;

        CREATE SERVICE [//InitiatorService]
               AUTHORIZATION InitiatorUser
               ON QUEUE InitiatorQueue_InitiatorDB;
        GO

        --Create references to target objects
        CREATE USER TargetUser WITHOUT LOGIN;

        CREATE CERTIFICATE InstTargetCertificate
           AUTHORIZATION TargetUser
           FROM FILE =
        N'\\DC\certshare\InstTargetCertificate_INS3.cer'
        GO

        --Create routes

        DECLARE @Cmd NVARCHAR(4000);

        SET @Cmd = N'USE InitiatorDB;
        CREATE ROUTE Initiator_TO_Target_Route
        WITH SERVICE_NAME = N''//TargetService'',
             ADDRESS = N''TCP://VM3:4022'';';

        EXEC (@Cmd);

        SET @Cmd = N'USE msdb
        CREATE ROUTE Inst_Local_InitiatorRoute
        WITH SERVICE_NAME =
               N''//InitiatorService'',
             ADDRESS = N''LOCAL''';

        EXEC (@Cmd);
        GO
        CREATE REMOTE SERVICE BINDING TargetBinding
              TO SERVICE
                 N'//TargetService'
              WITH USER = TargetUser;

        GO

Now, on the target VM3\INS3, run the below. Note that I am using the routing address of VM1 while creating the Target_To_Initiator_Route. We'll change that to the name of the AG listener after we test whether Service Broker is sending and receiving messages between VM1\INS1 and VM3\INS3 (without the AG in the picture).

        -------------------------------------------------------
        --Lesson 3: Completing the Target Conversation Objects
        --***[connect to the TARGET instance: VM3\INS3]***--
        -------------------------------------------------------

        --Create references to initiator objects
        USE TargetDB
        GO
        CREATE USER InitiatorUser WITHOUT LOGIN;

        CREATE CERTIFICATE InstInitiatorCertificate
           AUTHORIZATION InitiatorUser
           FROM FILE = N'\\DC\certshare\InstInitiatorCertificate_INS1.cer';
        GO

        --Create routes

        DECLARE @Cmd NVARCHAR(4000);

        SET @Cmd = N'USE TargetDB;
        CREATE ROUTE Target_To_Initiator_Route
        WITH SERVICE_NAME =
               N''//InitiatorService'',
             ADDRESS = N''TCP://VM1:4022'';';

        EXEC (@Cmd);

        SET @Cmd = N'USE msdb
        CREATE ROUTE Inst_Local_TargetRoute
        WITH SERVICE_NAME =
                N''//TargetService'',
             ADDRESS = N''LOCAL''';

        EXEC (@Cmd);
        GO
        GRANT SEND
              ON SERVICE::[//TargetService]
              TO InitiatorUser;
        GO
        CREATE REMOTE SERVICE BINDING InitiatorBinding
              TO SERVICE N'//InitiatorService'
              WITH USER = InitiatorUser;
        GO

 

This is the basic Service Broker setup between VM1\INS1 and VM3\INS3. You can test it by sending and receiving messages using the below scripts.

        --------------------------------------------------
        --Lesson 4: Beginning the Conversation--
        --***[connect to the INITIATOR instance: VM1\INS1]***--
        --------------------------------------------------

        USE InitiatorDB;
        GO
        DECLARE @InitDlgHandle UNIQUEIDENTIFIER;
        DECLARE @RequestMsg NVARCHAR(100);
        BEGIN TRANSACTION;
        BEGIN DIALOG @InitDlgHandle
        FROM SERVICE [//InitiatorService]
        TO SERVICE N'//TargetService'
        ON CONTRACT [//SampleContract] WITH ENCRYPTION = OFF;
        SELECT @RequestMsg = N'<RequestMsg:1>Message for Target service.</RequestMsg:1>';
        SEND ON CONVERSATION @InitDlgHandle
        MESSAGE TYPE [//RequestMsg]
        (@RequestMsg);
        SELECT @RequestMsg AS SentRequestMsg;

        COMMIT TRANSACTION;
        GO

Once the message is sent from the initiator, it can be seen by querying the Target Queue on the target database.

Select * from [dbo].[TargetQueue_TargetDB]

If it is not present in the target queue, query sys.transmission_queue on the initiator; any errors will be shown in the transmission_status column.
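For example, a quick check on the initiator:

-- Messages still waiting on the initiator; the reason is in transmission_status
SELECT to_service_name, enqueue_time, transmission_status
FROM sys.transmission_queue;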
Now, run the below on the target instance to receive the request and send a reply.

 

        --------------------------------------------------
        --Lesson 5: Receiving a Request and Sending a Reply--
        --***[connect to the TARGET instance: VM3\INS3]***--
        --------------------------------------------------

        --Receive the request and send a reply
        USE TargetDB;
        GO

        DECLARE @RecvReqDlgHandle UNIQUEIDENTIFIER;
        DECLARE @RecvReqMsg NVARCHAR(100);
        DECLARE @RecvReqMsgName sysname;

        BEGIN TRANSACTION;

        WAITFOR
        ( RECEIVE TOP(1)
            @RecvReqDlgHandle = conversation_handle,
            @RecvReqMsg = message_body,
            @RecvReqMsgName = message_type_name
          FROM [dbo].[TargetQueue_TargetDB]
        ), TIMEOUT 1000;

        SELECT @RecvReqMsg AS ReceivedRequestMsg;

        IF @RecvReqMsgName = N'//RequestMsg'
        BEGIN
             DECLARE @ReplyMsg NVARCHAR(100);
             SELECT @ReplyMsg =
                N'<ReplyMsg:1>Message for Initiator service.</ReplyMsg:1>';

             SEND ON CONVERSATION @RecvReqDlgHandle
                  MESSAGE TYPE [//ReplyMsg]
                  (@ReplyMsg);

             END CONVERSATION @RecvReqDlgHandle;
        END

        SELECT @ReplyMsg AS SentReplyMsg;

        COMMIT TRANSACTION;
        GO

 

Make sure this message appears in the initiator queue on VM1\INS1.

End the conversation by running the below on the initiator: VM1\INS1:

 

        --------------------------------------------------
        --Lesson 6: Receiving the Reply and Ending the Conversation--
        --***[connect to the INITIATOR instance: VM1\INS1]***--
        --------------------------------------------------

        --Receive the reply and end the conversation
        USE InitiatorDB;
        GO

        DECLARE @RecvReplyMsg NVARCHAR(100);
        DECLARE @RecvReplyDlgHandle UNIQUEIDENTIFIER;

        BEGIN TRANSACTION;

        WAITFOR
        ( RECEIVE TOP(1)
            @RecvReplyDlgHandle = conversation_handle,
            @RecvReplyMsg = message_body
          FROM [dbo].[InitiatorQueue_InitiatorDB]
        ), TIMEOUT 1000;

        END CONVERSATION @RecvReplyDlgHandle;

        -- Display the received reply.
        SELECT @RecvReplyMsg AS ReceivedReplyMsg;

        COMMIT TRANSACTION;
        GO

 

2.  Take a full database backup and a transaction log backup of the initiator database on VM1\INS1 and restore them with NORECOVERY on the 2nd initiator (secondary replica): VM2\INS2.
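A sketch of this step, assuming the share \\DC\CertShare is reachable from both replicas:

--On VM1\INS1
BACKUP DATABASE InitiatorDB TO DISK = N'\\DC\CertShare\InitiatorDB.bak';
BACKUP LOG InitiatorDB TO DISK = N'\\DC\CertShare\InitiatorDB.trn';

--On VM2\INS2
RESTORE DATABASE InitiatorDB FROM DISK = N'\\DC\CertShare\InitiatorDB.bak' WITH NORECOVERY;
RESTORE LOG InitiatorDB FROM DISK = N'\\DC\CertShare\InitiatorDB.trn' WITH NORECOVERY;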

3.  Create an availability group named AGService and add the database InitiatorDB to it, with VM1\INS1 as the primary replica and VM2\INS2 as the synchronous secondary replica. Also, create an AG listener named LIST.
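A minimal T-SQL sketch of this step (run on VM1\INS1; the mirroring endpoint URLs and the listener IP are assumptions, adjust them to your environment):

CREATE AVAILABILITY GROUP AGService
FOR DATABASE InitiatorDB
REPLICA ON
    N'VM1\INS1' WITH (ENDPOINT_URL = N'TCP://VM1:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, FAILOVER_MODE = AUTOMATIC),
    N'VM2\INS2' WITH (ENDPOINT_URL = N'TCP://VM2:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, FAILOVER_MODE = AUTOMATIC);
GO
ALTER AVAILABILITY GROUP AGService
ADD LISTENER N'LIST' (WITH IP ((N'10.0.0.50', N'255.255.255.0')), PORT = 1433);
GO

On VM2\INS2, the replica then joins with ALTER AVAILABILITY GROUP AGService JOIN; and the restored database joins with ALTER DATABASE InitiatorDB SET HADR AVAILABILITY GROUP = AGService;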

4.  Now, fail over the AG to the VM2\INS2 replica (the failover T-SQL is sketched after the query below). After the failover, check whether Service Broker is enabled on the database we restored on VM2\INS2; the is_broker_enabled bit should be set to 1.

select is_broker_enabled, * from sys.databases where name = 'InitiatorDB'
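For reference, the failover itself can be issued with T-SQL from the secondary replica (a sketch; run on VM2\INS2):

ALTER AVAILABILITY GROUP AGService FAILOVER;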

5.  Create the Service Broker endpoint on the replica: VM2\INS2

USE MASTER
GO
CREATE ENDPOINT InstInitiatorEndpoint
STATE = STARTED
AS TCP ( LISTENER_PORT = 4022, LISTENER_IP=ALL )
FOR SERVICE_BROKER (AUTHENTICATION = WINDOWS );
GO

 

6.  Create the local route in msdb on the secondary replica: VM2\INS2

        USE msdb
        GO
        CREATE ROUTE Inst_Local_InitiatorRoute
        WITH SERVICE_NAME =
        N'//InitiatorService',
        ADDRESS = N'LOCAL'

7.  Back up the Service Master Key from the primary replica VM1\INS1 and restore it on the secondary replica VM2\INS2

BACKUP SERVICE MASTER KEY TO FILE = '\\DC\CertShare\SMKVM1' ENCRYPTION BY PASSWORD = 'IamStrong!'

RESTORE SERVICE MASTER KEY FROM FILE = '\\DC\CertShare\SMKVM1' DECRYPTION BY PASSWORD = 'IamStrong!' FORCE


8.  Now, we'll change the Target_To_Initiator_Route to use the AG listener. This makes sure Service Broker keeps working even after an AG failover.

--Run this on VM3\INS3
USE [TargetDB]
GO
ALTER ROUTE [Target_To_Initiator_Route]
WITH SERVICE_NAME =
       N'//InitiatorService',
     ADDRESS = N'TCP://LIST:4022'

You can test the Service Broker functionality now between the new initiator VM2\INS2 and the target VM3\INS3 by running the queries in step 1 (only Lesson 4 to Lesson 6).

Author:

Prabhjot Kaur, Support Engineer, Microsoft India GTSC

Reviewed by:

Balmukund Lakhani, Support Escalation Engineer, Microsoft India GTSC


SQL Server Service fails to start after applying patch. Error: CREATE SCHEMA failed due to previous errors.


In this post we would like to explain one of the interesting issues that we encountered while upgrading a SQL Server Instance.

Symptoms

The SQL Server service fails to start after applying a SQL Server patch, due to a misconfiguration in msdb.

Error:

2016-06-28 19:23:41.22 spid5s    Script level upgrade for database 'master' failed because upgrade step 'msdb110_upgrade.sql' encountered error 2714, state 6, severity 25. Severity: 16, State: 0.
2016-06-28 19:23:41.22 spid5s    CREATE SCHEMA failed due to previous errors.
2016-06-28 19:23:41.22 spid5s    Error: 912, Severity: 21, State: 2.
2016-06-28 19:23:41.22 spid5s    Script level upgrade for database 'master' failed because upgrade step 'msdb110_upgrade.sql' encountered error 2714, state 6, severity 25. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the 'master' database, it will prevent the entire SQL Server instance from starting. Examine the previous error log entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion.

2016-06-28 19:23:41.22 spid5s    Error: 3417, Severity: 21, State: 3.
2016-06-28 19:23:41.22 spid5s    Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.

2016-06-28 19:23:41.22 spid5s    SQL Server shutdown has been initiated.

Cause

  • The upgrade script [msdb110_upgrade.sql] executes during the first restart of SQL Server after the service pack installation.
  • This script hits an exception while recreating the database role "DatabaseMailUserRole".
  • This is because the schema named "DatabaseMailUserRole" was owned by a database role other than "DatabaseMailUserRole" (the dbo role, in our case).

Resolution

1. Start the SQL Server service using trace flag -T902 (trace flag 902 skips execution of upgrade scripts during SQL Server startup)


Or follow these steps:

· Open SQL Server Configuration Manager.

· In SQL Server Configuration Manager, click SQL Server Services.

· Double-click the SQL Server service.

· In the SQL Server Properties dialog box, click the Advanced tab.

· On the Advanced tab, locate the Startup Parameters item.

· Add ;-T902 to the end of the existing string value, and then click OK.

· Restart the SQL Server service

2. Connect to the SQL Server instance and back up the msdb database

3. Manually delete the schema named "DatabaseMailUserRole":

Management Studio > expand the msdb database > Security > Schemas > look for DatabaseMailUserRole


4. Now delete the schema named DatabaseMailUserRole (a T-SQL sketch follows).
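Equivalently, the schema can be dropped with T-SQL (a sketch; the msdb backup from step 2 is assumed to be in place):

USE msdb;
GO
-- Drop the orphaned schema so the upgrade script can recreate the role cleanly
DROP SCHEMA [DatabaseMailUserRole];
GO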

5. Restart the SQL Server service.


More information

  • Starting with SQL Server 2008, whenever we upgrade or apply a patch to SQL Server, setup upgrades only the binaries, not the databases and their objects.
  • Once the upgrade completes and the service restarts for the first time, the database upgrade starts via the script msdb110_upgrade.sql, which is located under

C:\Program Files\Microsoft SQL Server\MSSQLXX.YYYY\MSSQL\Install\

XX : SQL Server version
     SQL 2008/R2 -> 10
     SQL 2012    -> 11
     SQL 2014    -> 12
YYYY : instance name (e.g., MSSQLSERVER)

--------------------------------------------------------------
-- Database Mail roles and permissions
--------------------------------------------------------------
-- Create the DatabaseMailUserRole role
IF (EXISTS (SELECT *
FROM msdb.dbo.sysusers
WHERE (name = N'DatabaseMailUserRole')
AND (issqlrole = 1)))
BEGIN -- if there are no members in the role, then drop and re-create it
IF ((SELECT COUNT(*)
FROM msdb.dbo.sysusers   su,
            msdb.dbo.sysmembers sm
WHERE (su.uid = sm.groupuid)
AND (su.name = N'DatabaseMailUserRole')
AND (su.issqlrole = 1)) = 0)
BEGIN
EXECUTE msdb.dbo.sp_droprole @rolename = N'DatabaseMailUserRole'
EXECUTE msdb.dbo.sp_addrole @rolename = N'DatabaseMailUserRole'  << **************Point of failure
END
END
ELSE
EXECUTE msdb.dbo.sp_addrole @rolename = N'DatabaseMailUserRole'

  • The command sp_addrole fails because it also creates a schema named "DatabaseMailUserRole", which already exists in the msdb database.
  • The preceding command sp_droprole was unable to delete the schema "DatabaseMailUserRole" because it was owned by another database role (the dbo role, in our case).

Steps to reproduce the error:

1. Change the schema owner of "DatabaseMailUserRole" to dbo

USE [msdb]
GO
ALTER AUTHORIZATION ON SCHEMA::[DatabaseMailUserRole] TO [dbo]
GO

2. Try to execute the below statements, and we hit the same error:

EXECUTE msdb.dbo.sp_droprole @rolename = N'DatabaseMailUserRole'
EXECUTE msdb.dbo.sp_addrole @rolename = N'DatabaseMailUserRole'  <<<**************Point of failure




Written by:
Ujjwal Patel, Support Engineer, SQL Server Support
Reviewed by:
Raghavendra Srinivasan, Support Engineer, SQL Server Support

Creating and Registering SSL Certificates


A few days back, I was working with one of our partners who had a requirement to create an SSL self-signed certificate through the MMC console. Since that is a complex and tedious procedure, I tried developing a script to ease the task. I also found a lot of partners asking for assistance with a script-based approach to creating the certificates.
I tried to find a way through various discussion forums, which yielded nothing but more queries on how to build such a script. Addressing this requirement of the partner pool, here is a blog explaining a script-based way of creating self-signed certificates and registering them so that they meet the prerequisites of SQL Server.
With the script-based approach, a single command gets the SSL self-signed certificate created and ready to be registered. Along with the creation of the certificate, this blog also explains the different ways of registering those certificates.

 

Scenario 1:

I will be creating an SSL self-signed certificate using the following 3 methods:

  • Using the Makecert utility from the SDK.
  • Using the certreq command and a script.
  • Using a PowerShell command.

Steps to be followed:

  1. Using the Makecert utility:
  • Firstly, the prerequisite for this method is to have the Windows SDK installed on the machine.
  • Navigate to the location of the makecert utility and run the below command from an elevated command prompt to create the certificate:

makecert -r -pe -n "CN=MININT-Q99PLQN.fareast.corp.microsoft.com" -b 10/16/2015 -e 12/01/2020 -eku 1.3.6.1.5.5.7.3.1 -ss my -sr localMachine -sky exchange -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12


  • We will have the certificate created under the MMC console -> Certificates snap-in -> Local Computer -> Personal section
  • As per the parameters specified, the certificate will be created with the following set of specifications:
    • The common name of the certificate will be "MININT-Q99PLQN.fareast.corp.microsoft.com", which is the FQDN of the machine.
    • The private key will be exportable.
    • The certificate will be created in the Computer account -> Personal -> Certificates store
    • The validity period will be 10-16-2015 to 12-01-2020
    • Server authentication will be enabled. [EKU = 1.3.6.1.5.5.7.3.1]
    • The Key Spec value will be set to 1. [AT_KEYEXCHANGE will be enabled]
    • The cryptographic provider used is Microsoft RSA SChannel Cryptographic Provider.

2. Using the certreq command:

  • Firstly, we need to save the below script in a text document with a .inf extension.

[Version]
Signature = "$Windows NT$"
[NewRequest]
Subject = "CN = MININT-Q99PLQN.fareast.corp.microsoft.com"
FriendlyName = test1.contoso.com
MachineKeySet = true
RequestType=Cert
;SignatureAlgorithm = SHA256
KeyLength = 4096
KeySpec = 1
KeyUsage = 0xA0
MachineKeySet = True
Exportable = TRUE
Hashalgorithm = sha512
ValidityPeriod = Years
ValidityPeriodUnits = 10
[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1

  • Navigate to the location where you have saved this request.inf file and then run the below command from an elevated command prompt:

Certreq -new -f <request.inf> <certificate.cer>

  • We will have the certificate created under the MMC console -> Certificates snap-in -> Local Computer -> Personal section
  • The advantages of this technique are that it does not require the Windows SDK to be installed, and the key length can be changed, whereas with makecert it defaults to 2048 for RSA and 512 for DSS.


3. Using a PowerShell command

  • Here is the approach to create an SSL certificate satisfying the prerequisites for SQL Server using PowerShell.
  • Run PowerShell as an administrator and enter the following command (where DnsName is the host name or FQDN of the machine):

New-SelfSignedCertificate -DnsName MININT-Q99PLQN.fareast.corp.microsoft.com -CertStoreLocation cert:\LocalMachine\My -FriendlyName test99 -KeySpec KeyExchange

Scenario 2:

I will be registering the SSL self-signed certificate using the following 2 methods:

  • Through the SQL Server Configuration Manager
  • Through explicit registration

Steps to be followed:

  1. Through SQL Server Configuration Manager:
  • First, check the health of the certificate using the CheckSQLssl.exe tool.
  • Here are the prerequisites for an SSL certificate to be usable by SQL Server:
    • The certificate must be present in the local computer certificate store or the current user certificate store.
    • The current date must fall within the certificate's validity period.
    • The certificate must be meant for server authentication. (The EKU should specify Server Authentication [1.3.6.1.5.5.7.3.1])
    • The certificate must be created using the KEY_SPEC option of AT_KEYEXCHANGE (KEY_SPEC=1)
    • The common name of the certificate should be the host name or the FQDN of the server computer.
    • Running the tool from the command prompt will generate the following report


  • Once all the validation checks for the certificate prerequisites come back OK, we can go ahead and register it.
  • In SSCM, expand SQL Server Network Configuration -> right-click 'Protocols for <Instance name>' -> Properties. Set 'Force Encryption' to Yes.


  • Click on the 'Certificate' tab, where the certificates will be listed, select the required certificate from the list, and restart the service.


  • Thus the SSL certificate will be loaded for the selected SQL Server instance; this can be verified by checking the SQL Server error logs for the below message and matching it with the thumbprint of the certificate in MMC (a DMV-based check also follows this list).

The certificate [Cert Hash(sha1) "BFB714872C7B2CD761ADEB1893BFC99581D3420B"] was successfully loaded for encryption.

  • To verify the thumbprint, in MMC double-click the loaded certificate, click the 'Details' tab, and click 'Thumbprint' in the list.

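In addition to the error log message, you can confirm from inside SQL Server that your own connection is actually encrypted (a quick DMV sketch):

SELECT session_id, encrypt_option, auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;  -- encrypt_option = TRUE when the connection is encrypted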

2. Through explicit registration:

  • If the certificate passes all the CheckSQLssl validation checks but is still not listed in SSCM, follow this technique.
  • Run 'regedit', open HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQLServer\SuperSocketNetLib, and enter the thumbprint of the certificate, without spaces, in the 'Certificate' value.
  • Note that in a clustered environment, on nodes whose FQDN does not match the certificate name, the certificate will not be listed in the configuration manager. In that case explicit registration is the only way to register the certificate.
  • Then, on restarting the SQL Server service, the SSL certificate will be loaded, and this can again be verified by checking the SQL Server error logs.

 

 

Written by:
Shreyas R, Support Engineer, SQL Server Support
Reviewed by:
Sravani Saluru, Support Escalation Engineer, SQL Server Support
Pradeep M M, Escalation Engineer, SQL Server Support

Troubleshooting SSL on SQL Server Issue – AT_KEYEXCHANGE is not set


One of the prerequisites for loading an SSL self-signed certificate into SQL Server is that its KEY_SPEC value be set to 1. Due to glitches during the creation of the certificate, there are scenarios where this value is not set to 1. If it happens to be 0 or 2, the AT_KEYEXCHANGE property of the certificate is not set. While working with one of my partners, we encountered exactly this issue: AT_KEYEXCHANGE was not set, i.e. the KEY_SPEC value of the SSL certificate used at their end was not 1. I also found that this question has been asked many times in various forums and discussions, and I could not find any blogs on the internet to help resolve it.

So I researched the issue, reproduced it by intentionally creating a certificate that violates the KEY_SPEC parameter value, and came up with a couple of action plans. The observations blogged here will help resolve the issue so that, instead of creating a new certificate, the existing certificate can be reused.

Scenario:

I will create an SSL self-signed certificate with KEY_SPEC not set to 1 and explain how to correct it so that the certificate becomes eligible to be loaded into SQL Server.

Approaches:

  1. By creating a 'PSEUDO' registry key
  • Firstly, I created a faulty SSL certificate with KEY_SPEC not set to 1.
  • If any assistance is needed to create the certificate, refer to my previous blog, where I explained the different ways of creating an SSL certificate.
  • After creating the certificate, running the CheckSQLssl tool on the server gives the following validation report for the certificate.


  • Now, to address this issue, create a DWORD value called "NoProviderName" and set it to 1 at the path HKCU\Software\Microsoft\Windows\CurrentVersion\PFX.
  • Export the certificate in the .PFX format and delete it from the store through the MMC.
  • To export, right-click the certificate -> All Tasks -> Export; the Certificate Export Wizard appears. Click NEXT to continue.
  • Note that for the private key to be exportable from the MMC, it must have been marked exportable when the certificate was created. If not, the 'Yes, export the private key' radio button shown below will be greyed out, and you cannot proceed.
  • On the Export Private Key page, select the 'Yes, export the private key' radio button as shown and click NEXT to continue.


  • On the Export File Format page, make sure the radio button for the .PFX format is selected as shown and click NEXT to continue.


  • On the Security page, check the Password check box, then enter and confirm a password; it will be needed when re-importing the certificate. Click NEXT to continue.
  • On the File to Export page, browse to the physical location on the machine where you want the certificate exported. Click NEXT, then FINISH, to complete the export.
  • Now navigate to the location where you exported the certificate and run the below command from an elevated command prompt:

      Certutil -importpfx <PFXFILENAME.pfx> AT_KEYEXCHANGE

where PFXFILENAME is the file name of the exported certificate.


The issue with the certificate is now resolved; this can be verified by running the CheckSQLssl tool again, and the certificate will now be eligible to be loaded into SQL Server.


2. Without creating any REGISTRY KEY:

 

If, due to security or product-functionality concerns, creating the 'PSEUDO' registry key is not permissible, this approach can be used instead. The course of action remains almost the same as the previous one, except that no registry key needs to be created.

  • Export the certificate and re-import it, exactly as before, but without creating the registry key.
  • Re-import the certificate using the following command:

certutil -csp "Microsoft Strong Cryptographic Provider" -importpfx <PFXFILENAME.pfx>


  • Thus the issue with the certificate is resolved, and it is now eligible to be loaded into SQL Server.
Written by:
Shreyas R, Support Engineer, SQL Server Support
Reviewed by:
Sravani Saluru, Support Escalation Engineer, SQL Server Support
Pradeep M M, Escalation Engineer, SQL Server Support

Error: Could not deploy package. Unable to connect to target server.


In this post we would like to explain one of the interesting issues that we encountered while deploying a DACPAC from SqlPackage.exe.

Symptoms

Cannot deploy a DACPAC extracted from a SQL Server 2012 instance to SQL Server 2014, either from custom .NET code or from the SqlPackage.exe command.


C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin>SqlPackage.exe /Action:Publish /SourceFile:"C:\temp\AgentLink2_11.0.6020.dacpac" /tsn:"RAGHAVSDC" /TargetDatabaseName:TestACM

Publishing to database 'TestACM' on server 'RAGHAVSDC'.
The dac history table will not be updated.
Initializing deployment (Start)
Initializing deployment (Failed)
*** Could not deploy package.
Unable to connect to target server.

Cause

There is no DAC folder at C:\Program Files (x86)\Microsoft SQL Server\120\ on the system; only C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin exists. (We can successfully publish to SQL 2012 but not to SQL 2014.)

Resolution

To reproduce the issue, navigate to the DAC folder at C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin.

Open a command prompt with administrator privileges, navigate to this path, and run SqlPackage.exe to publish to a SQL 2014/2016 server; we get the exact same error:

"*** Could not deploy package.

Unable to connect to target server."

At first glance, the above looks like a connectivity error, but that is not the case here. We tested the connectivity from multiple machines and found no issue with it. The solution is to install the DAC Framework (https://www.microsoft.com/en-in/download/details.aspx?id=42293); once it is installed, the DAC folder appears at C:\Program Files (x86)\Microsoft SQL Server\120\DAC\

We can then publish the DACPAC using SqlPackage.exe from the 120 location, and it publishes successfully.


More Information:

In the above scenario, we noticed that the DACPAC must be published using the SqlPackage.exe version that matches the target server.

If we have taken a DACPAC from SQL 2012, we can publish it to any higher version of SQL Server, but it needs to be published from the 120 folder (C:\Program Files (x86)\Microsoft SQL Server\120\DAC\Bin) to target SQL 2014. If we are publishing the DACPAC taken from SQL 2012 to SQL 2016, then we need to publish the package from the 130 folder (C:\Program Files (x86)\Microsoft SQL Server\130\DAC\Bin).

A DACPAC is a feature of the Data-tier Application framework that allows us to capture the schema of a database. In simple terms, it contains only the database schema (definitions without data) and can be used on higher versions of SQL Server. SqlPackage.exe is a utility that allows us to automate database development and deployment in our environment.

Related articles:

SQLPackage.exe: https://msdn.microsoft.com/en-us/library/hh550080(v=vs.103).aspx

Data Tier Applications: https://msdn.microsoft.com/en-us/library/ee210546.aspx

Design and Implementation for DACPAC: https://technet.microsoft.com/en-us/library/ee210546(v=sql.110).aspx

DAC Framework download: https://www.microsoft.com/en-in/download/details.aspx?id=42293

Written by – Ujjwal Patel, Support Engineer.
Reviewed by – Raghavendra Srinivasan, Sr. Support Engineer.

Automated backups configuration fails when configured from Azure portal


In this post, we would like to explain one of the interesting issues that we encountered while using the automated backup feature for a VM from the Azure portal (you can find the option under VM > SQL Server Configuration > Automated Backup).

Symptoms

Cannot configure automated backups from the Azure portal for a VM created with the ARM (Azure Resource Manager) model. It fails with the following error:

• TYPE
Microsoft.Compute/virtualMachines/extensions
• RESOURCE ID
/subscriptions/6c28b945-6d98-403d-8936-5e658f228a0f/resourceGroups/Group/providers/Microsoft.Compute/virtualMachines/LTO-CT-SQL/extensions/SqlIaasExtension
• STATUSMESSAGE
{ "status": "Failed", "error": { "code": "ResourceDeploymentFailure", "message": "The resource operation completed with terminal provisioning state 'Failed'.", "details": [ { "code": "VMExtensionHandlerNonTransientError", "message": "Handler 'Microsoft.SqlServer.Management.SqlIaaSAgent' has reported failure for VM Extension 'SqlIaasExtension' with terminal error code '1009' and error message: 'Enable failed for plugin (name: Microsoft.SqlServer.Management.SqlIaaSAgent, version 1.2.10.0) with exception Command C:\\Packages\\Plugins\\Microsoft.SqlServer.Management.SqlIaaSAgent\\1.2.10.0\\enable.cmd of Microsoft.SqlServer.Management.SqlIaaSAgent has exited with Exit code: 255'" } ] } }
• RESOURCE
LTO-CT-SQL/SqlIaasExtension
• OPERATION ID
B3B967D4EF42741A

Cause

The SQL IaaS Agent service was disabled and did not start due to insufficient permissions.

Resolution

We reproduced the issue using the following method.

We deployed a VM on our end, navigated to VM > SQL Server Configuration > Automated Backup, and it failed with a similar error:


statusMessage:{"status":"Failed","error":{"code":"ResourceDeploymentFailure","message":"The resource operation completed with terminal provisioning state 'Failed'.","details":[{"code":"VMExtensionHandlerNonTransientError","message":"Handler 'Microsoft.SqlServer.Management.SqlIaaSAgent' has reported failure for VM Extension 'SqlIaasExtension' with terminal error code '1009' and error message: 'Enable failed for plugin (name: Microsoft.SqlServer.Management.SqlIaaSAgent, version 1.2.10.0) with exception Command C:\\Packages\\Plugins\\Microsoft.SqlServer.Management.SqlIaaSAgent\\1.2.10.0\\enable.cmd of Microsoft.SqlServer.Management.SqlIaaSAgent has exited with Exit code: -532462766'"}]}}

We then went to the VM and checked the event viewer application and system logs and found the below errors:

The Microsoft SQL Server IaaS Agent service failed to start due to the following error:
The service did not start due to a logon failure.

The SQLIaaSExtension service was unable to log on as NT Service\SQLIaaSExtension with the currently configured password due to the following error:
Logon failure: the user has not been granted the requested logon type at this computer.

Service: SQLIaaSExtension
Domain and account: NT Service\SQLIaaSExtension

This service account does not have the required user right "Log on as a service."

The above clearly indicates that the SQLIaaSExtension account needs to be granted this right in the local security policy.

We went to Run > secpol.msc > Security Settings > Local Policies > User Rights Assignment > Log on as a service (in the right pane) > right-clicked, opened its Properties, and added this account.


We then tried to create the automated backup again and did not see the error in the event viewer.
To figure out where this account is used, we looked at services.msc and found that the account is used by the Microsoft SQL Server IaaS Agent service.
We saw the service was in a stopped state.


Researching this further, we found that the SQL Server IaaS Agent service helps automate administrative tasks, for example running jobs, monitoring SQL Server, and processing alerts. When we enable Automated Backup on a virtual machine, the extension is installed automatically, but in our scenario it did not start due to the account permission issues.

We started the service and then configured the automated backups from the Azure portal again, and it completed successfully without any errors. If it still fails even after that, the next step is to look at C:\WindowsAzure\Logs and C:\Packages\Plugins on the IaaS VM for any errors.

More Information:

The Microsoft SQL Server IaaS Agent service must be running for automated backups to be enabled and to function. When we enable Automated Backup on our virtual machine, the extension is installed automatically.
Automated Backup automatically configures Managed Backup to Microsoft Azure for all existing and new databases on an Azure VM running SQL Server 2014 Standard or Enterprise. This enables us to configure regular database backups that use durable Azure blob storage. Automated Backup depends on the SQL Server IaaS Agent Extension.
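Once Automated Backup is working, you can also verify from inside SQL Server 2014 that Managed Backup picked up the databases (a sketch; smart_admin is the SQL Server 2014 Managed Backup schema, and passing NULL returns all databases):

SELECT db_name, is_managed_backup_enabled, retention_days, storage_url
FROM msdb.smart_admin.fn_backup_db_config(NULL);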

Related articles:

More information on Automated Backups: https://azure.microsoft.com/en-in/documentation/articles/virtual-machines-windows-sql-automated-backup/

More information on IAAS Agent Service: https://azure.microsoft.com/en-in/documentation/articles/virtual-machines-windows-sql-server-agent-extension/

 

Written by:
Ujjwal Patel, Support Engineer, SQL Server Support

Reviewed by:
Raghavendra Srinivasan, Sr. Support Engineer, SQL Server Support

 
