Speaking at SQLGrillen 2018

[German version below]

Hello #SQLFamily,

As previously announced, I was selected to speak in the Newcomer Track of SQLGrillen 2018.

My first ever session at a SQL Server conference was in German, called “Mission SQL Migration – Aus Blech wird VM”, and was about migrating physical SQL Servers to virtual ones.
The important parts were the VMware architecture, the guest configuration, and how to migrate a whole SQL Server with one single command.

About 20 people were in the room and the session went smoothly. The demo on a remote server in our company network also worked like a charm. Because of some deeper discussions about the need for virtualization and its pros and cons, I was not able to show more of the fantastic dbatools commands I had prepared.
So only Start-DbaMigration was shown; the “Best Practices” commands like Test-DbaMaxMemory, Test-DbaMaxDop, Test-DbaTempDbConfiguration (…) and the important Invoke-DbaDatabaseUpgrade, which upgrades the migrated databases to the latest compatibility level, were not.
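The commands mentioned above can be sketched roughly like this (a hedged sketch: instance names and the share path are placeholders, and parameter names may differ slightly between dbatools versions – for example, newer releases call -NetworkShare -SharedPath):

```powershell
# Load the dbatools module (https://dbatools.io)
Import-Module dbatools

# Migrate everything (databases, logins, jobs, linked servers, ...)
# from the old physical server to the new VM with one command.
# -BackupRestore transfers the databases via backup/restore over the share.
Start-DbaMigration -Source OLDPHYSICAL01 -Destination NEWVM01 -BackupRestore -NetworkShare \\fileserver\sqlmigration

# Afterwards, check the new instance against best practices.
Test-DbaMaxMemory -SqlInstance NEWVM01
Test-DbaMaxDop -SqlInstance NEWVM01
Test-DbaTempDbConfiguration -SqlInstance NEWVM01

# Raise the compatibility level of the migrated databases.
Invoke-DbaDatabaseUpgrade -SqlInstance NEWVM01
```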

But in the end I finished just in time and was happy with how it went.
I got some direct feedback from friends and also from my mentor Björn Peters (t), who had helped me a lot in preparing the session. Thanks a lot!

Hopefully I’ll be able to present at other SQL conferences in the future.

Here is the complete (German) presentation: Mission SQL Migration – Aus Blech wird VM 2018-06-20

Thanks for reading,
Volker


[German]

Hallo #SQLFamily,

Wie bereits angekündigt, wurde ich ausgewählt, im Newcomer Track von SQL Grillen 2018 zu sprechen.

Meine allererste Session auf einer SQL Server-Konferenz war auf Deutsch und hieß “Mission SQL Migration – Aus Blech wird VM”. Es ging darum, physikalische SQL Server auf virtuelle SQL Server zu migrieren.
Die wichtigsten Teile waren die VMware-Architektur, die Guest-Konfiguration und die Migration des gesamten SQL-Servers mit einem einzigen Befehl.

Ungefähr 20 Leute waren im Raum und die Session lief glatt. Auch die Demo auf einem Remote-Server in unserem Firmennetzwerk funktionierte reibungslos. Aufgrund einiger tiefer gehender Diskussionen über die Notwendigkeit von Virtualisierung und die Vor- und Nachteile, konnte ich einige der fantastischen dbatools-Befehle, die ich vorbereitet hatte, nicht mehr zeigen.
Es wurde also leider nur die eigentliche Migration mit dem Kommando Start-DbaMigration gezeigt und die “Best Practices” -Befehle wie Test-DbaMaxMemory, Test-DbaMaxDop, Test-DbaTempDBConfiguration (…) und das wichtige Invoke-DbaDatabaseUpgrade für das Upgrade der migrierten Datenbanken auf die letzte Kompatibilitätsstufe wurden aus Zeitmangel leider nicht mehr gezeigt.

Aber am Ende war ich gerade rechtzeitig fertig und ich war glücklich, wie es geklappt hat. Ich habe ein direktes, positives Feedback von Freunden und auch von meinem Mentor Björn Peters (t) erhalten, der mir bei der Vorbereitung der Session sehr geholfen hat. Vielen Dank nochmal dafür!

Ich hoffe auch auf zukünftigen SQL Konferenzen noch als Sprecher vortragen zu können um meine Erkenntnisse und Erfahrungen mit der SQL Community zu teilen.

Hier findet sich noch die Präsentation: Mission SQL Migration – Aus Blech wird VM 2018-06-20

Vielen Dank fürs Lesen,
Volker

dbWarden – another change in sp_helpdistributor with SQL Server 2017 CU6 or CU7

Another change to the sp_helpdistributor stored procedure in CU6 or CU7 of SQL Server 2017 requires a change in the dbWarden rpt_HealthReport stored procedure.
I upgraded from CU5 to CU7, so one of the last two CUs changed the system SP again.

In addition to the two fields described in a previous article, one more needs to be added before calling the SP:

dist_listener NVARCHAR(200)

After that, the Health Report works like before.

Update: with a new replication setup there is another error.
One more field is missing in the #PUBINFO temporary table in rpt_HealthReport.
Add publisher NVARCHAR(128) at the end of the temp table definition and then call sp_replmonitorhelppublication.
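Taken together, the two fixes can be sketched like this (a hedged sketch: #PUBINFO is named in the post, the other temp table name is a placeholder, and the existing dbWarden columns are elided):

```sql
-- Inside dbWarden's rpt_HealthReport (sketch, not the full procedure):

-- 1) Result table for sp_helpdistributor: append the new field at the end
CREATE TABLE #DISTRIBUTORINFO (          -- placeholder name
    distributor sysname NULL,
    /* ... existing columns ... */
    dist_listener NVARCHAR(200) NULL     -- new with SQL Server 2017 CU6/CU7
);
INSERT INTO #DISTRIBUTORINFO EXEC sp_helpdistributor;

-- 2) #PUBINFO: append the missing field at the end of its definition
--       publisher NVARCHAR(128) NULL
-- and only then call:
-- INSERT INTO #PUBINFO EXEC sp_replmonitorhelppublication;
```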

Thanks for reading.

Regards,
Volker

Resources:

I’m speaking @SQLGrillen 2018 – Topic: Mission SQL Migration – Aus Blech wird VM

For the first time, I will be speaking at a SQL Server conference!!!
Next Friday (22 June 2018), SQLGrillen in Lingen (Ems) offers a Newcomer Track, which I will open right at 9:00 with my talk:
Mission SQL Migration – Aus Blech wird VM

In it, I will report on my project of migrating physical SQL Servers into a VMware environment. It is based, among other things, on the posts on this blog with the corresponding tag: https://blog.volkerbachmann.de/tag/sql-on-vmware/

See you there?

Best regards,
Volker

Project SQL Server on VMware – Summary

To conclude my article series on the “Migration of physical SQL Servers into a VMware environment”, here is the promised summary.

The project is considered complete, although one physical SQL Server currently still remains to be virtualized. This is partly because the server is under hardware support until the end of 2019, and partly because the backup strategy for the virtual machines is still unclear.

For that, I have to elaborate a little. Currently, the SQL databases are backed up with standard backup methods (natively or with Redgate SQL Backup) – full and log backups – to a central server, from which daily backups are transferred to our backup solution Quest Rapid Recovery (formerly Dell AppAssure).
In addition, the production databases are shipped every 15 minutes via log shipping to separate SQL Servers as a disaster recovery solution.

Bringing a database back up on the log shipping server takes an estimated 30-60 minutes if the right resources are available. The following actions are necessary:

  1. Block access to the old databases – if that has not already happened through the failure of the old server.
  2. Activate the corresponding database on the log shipping server, i.e. bring it out of the recovery state after the last log has been applied.
  3. Edit configuration files or point the corresponding CNAME in the DNS server to the log shipping server.
  4. After that, the application can be restarted.
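The activation step – taking the secondary database out of the recovery state – can be sketched in T-SQL like this (database name and backup path are placeholders):

```sql
-- Apply the last transaction log backup, then recover the database
RESTORE LOG SalesDB
    FROM DISK = N'\\backupserver\logs\SalesDB_last.trn'
    WITH NORECOVERY;

-- Bring the database online (out of the restoring/recovery state)
RESTORE DATABASE SalesDB WITH RECOVERY;
```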

So far, so good for the databases. 😉

However, the VM itself is not yet backed up; i.e., in case of a failure inside the VM – for example caused by updates – there is no backup of the complete VM. Due to the missing second SAN (see the first article of my blog series), replicating the VMs, and thus backing them up that way, is currently not possible. I am currently testing two different VM backup products from Veeam and Vembu in order to back up the VMs somehow, but so far this does not work fully automatically.

Since the VMs can be created from a template, creating one does not take as long as building it from an ISO; however, a complete re-setup with all configurations (RAM, CPU, Agent jobs, linked servers, users, etc.) still takes somewhat longer than restoring a VM.
During a migration, the source server is available to quickly transfer the configuration (using dbatools); with a broken VM, that is no longer possible.

Here we still see potential for improvement: either regular backups of the VMs or replication to a second SAN.

That is basically the explanation of why we considered the project complete even though not all servers are virtualized yet. Another VMware project is planned for 2019, and these considerations will feed into it.

The summary also includes the before/after comparison. This includes observing whether the three hardware servers (ESX hosts) can take over the load of the formerly independent physical SQL Servers.
We reduced the number of physical cores from a total of 116 to now 72 in the VMware environment. The remaining physical server is already planned for the VMware environment, but has not yet been assigned its final core count.

As we were able to determine, the VMware environment is capable of carrying all the physical servers. Currently, the resources in the VMs are not yet reserved and therefore not firmly assigned. That will become necessary at the latest when the remaining VMs are added.

This is where the article series ends.

Thanks for reading,
Volker

These are the links to the other parts of the article series:

Part 1: Introduction, or the “Why?”
Part 2: Configuration of the VMware environment
Part 3: VMware guest configuration
Part 4: Migration of SQL Server with PowerShell dbatools

Error in Power BI RS reports with an apostrophe in the name or folder name

Since we upgraded to Power BI Report Server (https://blog.volkerbachmann.de/2017/11/08/upgrade-sql-server-2016-reporting-server-to-sql-2017-and-pbi-rs/), there are problems with paginated reports that have an apostrophe (‘) in the file or folder name. Because we use it for French reports, there are some reports where this character occurs. The problem arises when one needs to manage or change the report; execution works normally.

There is an issue reported in the Power BI community forum: http://community.powerbi.com/t5/Issues/Apostrophe-in-Folder-Name-Causes-Menu-Failure/idi-p/304933


Seitdem wir auf den Power BI Reporting Server (2017) upgegradet haben (Link zu dem Artikel oben), gibt es Schwierigkeiten bei der Verwaltung von seitenbasierten Berichten, die einen Apostroph (‘) im Namen oder dem Ordner-Namen enthalten. Diese lassen sich aktuell nicht anpassen, es sei denn man nennt den Report oder den Ordner um und entfernt dabei dieses Sonderzeichen.

In der Power BI Community existiert ein Issue Eintrag dazu. Es besteht die Hoffnung dass der Fehler in Kürze behoben wird.
http://community.powerbi.com/t5/Issues/Apostrophe-in-Folder-Name-Causes-Menu-Failure/idi-p/304933

Danke fürs Lesen.

Gruß,
Volker

dbWarden SQL Server Monitoring Script with SQL Server 2017

[german version below]

I’m still using the free dbWarden monitoring scripts for easy basic monitoring of our SQL Server environment. Links to the original documentation are at the bottom of this short blog article.

With SQL Server 2017, a change is necessary for the Health Report to run properly. It’s nearly the same point that I described in January for SQL 2012 and above (German only).
The replication helper procedure has changed and returns additional fields.
It is the sp_helpdistributor stored procedure. dbWarden uses a temporary table to receive the results from this helper SP.

The call to the SP (EXEC sp_helpdistributor) at the bottom caused the error:
“Column name or number of supplied values does not match table definition”

The two fields (deletebatchsize_xact and deletebatchsize_cmd) need to be added to the temporary table definition before the insert command can be executed without error.
All of this is found in the stored procedure rpt_HealthReport, which is called by the Agent job “Health Report”.
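The pattern and the fix can be sketched like this (a hedged sketch: the temp table name is a placeholder and the existing dbWarden columns are elided):

```sql
-- Sketch of the pattern inside rpt_HealthReport:
CREATE TABLE #HELPDISTRIBUTOR (        -- placeholder name
    distributor sysname NULL,
    /* ... existing columns ... */
    deletebatchsize_xact INT NULL,     -- new with SQL Server 2017
    deletebatchsize_cmd  INT NULL      -- new with SQL Server 2017
);

-- Without the two extra columns this INSERT fails with
-- "Column name or number of supplied values does not match table definition":
INSERT INTO #HELPDISTRIBUTOR EXEC sp_helpdistributor;
```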

Resources (below)

Thanks for reading,
Volker


Ich benutze immer noch die kostenlosen dbWarden Monitoring Scripts für eine einfache Überwachung unserer SQL Server-Umgebung. Links zur Originaldokumentation finden Sie am Ende dieses kurzen Blog-Artikels.

Mit SQL Server 2017 ist eine neue Änderung erforderlich, damit der Health-Bericht ordnungsgemäß ausgeführt werden kann. Es ist fast derselbe Punkt, den ich im Januar für SQL 2012 und höher beschrieben habe.
Die Replication Helper Prozedur sp_helpdistributor wurde geändert und benötigt zusätzliche Parameter.

Der Aufruf der SP (EXEC sp_helpdistributor) verursachte den Fehler:
“Spaltenname oder Anzahl der angegebenen Werte stimmt nicht mit der Tabellendefinition überein”

Die beiden Felder (deletebatchsize_xact und deletebatchsize_cmd) müssen der temporären Tabellendefinition hinzugefügt werden, bevor der INSERT-Befehl ohne Fehler ausgeführt werden kann.
Dies ist in der Stored Procedure rpt_HealthReport zu finden, die vom Agent-Job “Health Report” aufgerufen wird.

Ressourcen:

Danke fürs Lesen!
Volker

Upgrade SQL Server 2016 Reporting Server to SQL 2017 and Power BI Reporting Server (PBIRS)

[german version below]


Hello Power BI community,

after our reporting department discovered Power BI, our existing Reporting Server 2016 Enterprise (including SA) was to be upgraded as soon as possible to the new SQL Server 2017 with Power BI Report Server.

Environment: VM with Windows Server 2016 Datacenter plus SQL Server 2016 with integrated Reporting Server.

  1. Backup 🙂
    1. Backup VM with Veeam Backup.
    2. Snapshot of the VM in addition to the backup.
    3. Backup of the report server key – very important !!
    4. Backup of the Configuration of the Report Server; take screenshots of each page.
    5. Backup the rs.config or better the whole MSRS13 directory.
  2. Update SQL Server 2016
    1. Start SQL Server 2017 installation and select the update of the edition
    2. During the upgrade, you are warned that the Reporting Server component will be uninstalled during the update. This must be confirmed with a check mark!
    3. After the update the server needs a reboot.
      The SQL Server Reporting service and almost the complete MSRS13.MSSQLServer directory were gone afterwards – except for LogFiles and RSTempFiles.
  3. Install Power BI Reporting Server (Edition 10/2017).

    1. The product key for the on-premises Power BI Report Server can be found in the VLSC portal, since we purchased the Enterprise SQL Server with SA.
      https://powerbi.microsoft.com/en-us/documentation/reportserver-find-product-key/
    2. After the installation the configuration will follow, and now it will be interesting:
      1. The configuration must be set up on the same database, URL, etc. as was previously the case with the Reporting Server 2016.
      2. After applying the URLs, a warning comes up saying that the corresponding options already existed and will now be updated with the new settings.
        That’s OK, and we move on to the next settings.

      3. After activating the previous settings, the PBIRS can be reached under the same URL, with all reports and the same subscriptions active that were there before.
  4. The migration is now complete, and you can now additionally upload Power BI reports to the new server.
    For communication with the server, an alternative PBI client is available, with which Power BI reports can be uploaded directly to the server and downloaded from it.

The download of this client, like that of the PBI Report Server itself, can be reached via this URL: https://aka.ms/pbireportserver

Thank you for reading!
Volker


Hallo Power BI Gemeinde,

Nachdem unsere Reporting Abteilung Power BI entdeckt hat, sollte schnellstmöglich unser vorhandener Reporting Server 2016 Enterprise (inkl. SA) auf den neuen SQL Server 2017 mit Power BI Report Server upgedatet werden.

Umgebung: VM mit Windows Server 2016 Datacenter plus SQL Server 2016 mit integriertem Reporting Server.

  1. Sicherung 🙂
    1. VM mit Veeam Backup
    2. Snapshot der VM zusätzlich
    3. Sicherung des Berichtsserver Schlüssels – ganz wichtig!!
    4. Sicherung der Konfiguration des Berichtsservers, am besten Screenshots der einzelnen Seiten anfertigen.
    5. Sicherung der rs.config bzw. des ganzen MSRS13 Verzeichnisses.
  2. SQL Server 2016 aktualisieren
    1. SQL Server Installation starten und die Aktualisierung der Edition auswählen
    2. Während der Aktualisierung wird darauf hingewiesen dass der Reporting Server Teil bei der Aktualisierung deinstalliert wird. Das muss zusätzlich bestätigt werden mit einem Haken!
    3. Nach der Aktualisierung war dann bei mir ein Neustart notwendig.
      Der SQL Server Reporting Dienst und auch fast das komplette Verzeichnis MSRS13.MSSQLServer war danach weg – bis auf LogFiles und RSTempFiles.
  3. Power BI Reporting Server (Version 10/2017) installieren.
    1. die Seriennummer zum Power BI Premium findet sich für uns im VLSC Portal da wir den Enterprise SQL Server mit SA gekauft hatten.
      https://powerbi.microsoft.com/de-de/documentation/reportserver-find-product-key/
    2. Nach der Installation folgt dann die Konfiguration, und jetzt wird es interessant:
      1. die Konfiguration ist auf die gleiche Datenbank, URL usw. einzurichten wie das vorher beim Reporting Server 2016 war.
      2. bei den URLs kommt bei der Speicherung der Hinweis, dass die entsprechenden Einstellungen vorher schon existierten und nun mit den neuen Einstellungen aktualisiert werden. Das ist OK und wir gehen weiter zu den nächsten Einstellungen.
      3. Nach der korrekten Aktivierung der vorherigen Einstellungen ist der PBIRS unter der selben URL erreichbar, hat alle Reports und Abos die vorher da waren auch wieder aktiv.
  4. Die Migration ist damit abgeschlossen und es können nun zusätzlich Power BI Reports auf den neuen Server hochgeladen werden.
    Für die Kommunikation mit dem Server ist ein alternativer PBI Client verfügbar mit dem Power BI Berichte direkt aus dem Client hochgeladen werden können und daraus auch geladen werden können.

Der Download ist, wie auch der des PBI Report Server selber, über diese URL zu erreichen: https://aka.ms/pbireportserver

Danke fürs Lesen!
Volker

#PSBlogWeek – free PDF with my “SQL Server Migration with PowerShell dbatools” now available

Hello,

a compilation of all the PowerShell articles of #PSBlogWeek 2017 is now available.
Included is my article about “SQL Server Migration with PowerShell dbatools”.

This is the PDF: PSBlogWeek eBook PowerShell-Server-Management

And here is the link to the original announcement from Adam Bertram (t | b): http://www.adamtheautomator.com/psblogweek-powershell-blogging-entire-week/

You’ll find links to the original six blog articles on the activity page of #PSBlogWeek.

http://psblogweek.com/psblogweek-activity

Thanks go out to all contributors and to Adam for hosting the event and compiling the eBook.

Thanks for reading!
Volker


Project SQL on VMware – Migration from Physical to Virtual

Migration of physical SQL Server to a new VMware Environment.

As announced in my last post, Migration of SQL Server with PowerShell dbatools #PSBlogWeek, I’ll write a recap of the first three parts of the project – and here we go.

Index:

  1. Why (do we need to do this project?)
  2. Ordering the hardware
  3. Configuration of the VMware environment
  4. Detail configuration of the individual virtual machines
  5. Migration of SQL Server with PowerShell dbatools (separate article)

It all began with the idea of reducing the number of physical servers – seven – that we use for our SQL Server databases.

Why?

  • old servers are running out of support and have to be replaced.
  • the number of SQL Server core licenses can be reduced.
  • better high availability and shorter restore times in case of a disaster recovery.
  • fewer unused resources on the physical servers.

To analyze and select the hardware, we used a Dell tool (DPack) to record the performance data of the seven physical servers for over a week. We saw a peak there of 11,000 IOPS that needs to be handled by the VMware environment.

IOPS of the physical SQL Server Environment

But that’s only a peak and is therefore considered the maximum value. The 95th-percentile value is 2,475 IOPS.

Here is the (German) summary for the seven servers with the collected data.

DPack summary

It says we need 661 MB/s, or 11,074 IOPS (2,475 at the 95th percentile), for the seven servers with 22 CPUs and 116 cores.

Therefore, three Dell PowerEdge R730 servers, two Dell 10G switches (S4048T), and two SANs (Compellent SC4020 with SSDs) as central storage were proposed.

For VMware, the Essentials Plus edition is enough for us, as this will be a completely independent environment. Windows Server 2016 Datacenter with the correct core count serves as the operating system in the VMs.
The SQL Servers are licensed with SQL Server 2016 Standard incl. SA in the per-core variant; a single one is raised to Enterprise 2016 for a Mobile Reports Reporting Server (SSRS).

The Order

Unfortunately, one of the two SANs was cut from the order for financial reasons, which reduced the total price by about 50K. The missing SAN does not affect the topic of HA to any great extent, but it is crucial for disaster recovery.
In the event of a complete failure of the SAN, the mission-critical production databases are not available on a second SAN within a very short time, but must be restored from backup. Here, in my opinion, the goal of shorter disaster recovery times is far off again.


This section discusses the configuration of the VMware environment, which was carried out during the initial installation by a Dell remote engineer.
The technology behind the implementation:
The three servers (Dell PowerEdge R730) each contain 2 CPUs with 12 cores – a total of 72 cores – and 512 GB RAM each. The operating system resides on two micro SD cards in each server (VMware 6.0; 6.5 is not yet supported by the SAN). The Compellent SC4020 SAN has 9 SSDs of 3.82 TB each, one of which is configured as a hot spare. The volumes are available as RAID 6 or RAID 10.
Servers and SAN are connected via the Dell S4048T switches with 4 x 10G each.
First, the physical installation of the environment was done in the data center. A problem arose: the switches were delivered without an OS. So it was necessary to attach serial consoles to the switches and then to the two controllers of the SAN. My colleague provided me with a 4-way serial-to-USB/network adapter based on a Raspberry Pi. FTP, TFTP, and several other servers also proved very helpful for the setup.
In addition, the servers and the SAN could still be reached via their management ports (Dell iDRAC), which was also necessary for the basic installation.
The basic diagram (picture below) shows the architecture of the environment well: at the top the servers, below them the two switches, to which the SAN is then connected. Of course, everything is cabled redundantly to achieve the greatest possible reliability.
Installation structure server, switch and SAN
URL from the Dell Installation Guide for the Compellent SAN SC 4020
(The source and copyright of the artwork are with Dell.)
The environment shown in the picture differs from ours only in that two servers are shown there, while three servers are used as hosts in our environment. Of course, there are further internal connections between servers, switches, and the SAN to separate, for example, VMware management, vMotion, iSCSI, and other traffic from the actual traffic to the rest of the network. The uplink is then made via two breakout cables with 4 x 10G each, which are connected directly to our central core switches.
The basic installation was then carried out by the mentioned Dell Remote technician.
• Server firmware updates
• VMware basic installation on the SD cards
• Switch OS installation and configuration
• SAN setup, creation of volumes
• iSCSI setup in VMware and connection of SAN volumes
During the subsequent tests of the various failure scenarios (switch, server, SAN controller), an error occurred during the reboot of one of the switches. Dell support replaced the switch directly.

This section is now about the detail configuration of the individual virtual machines.

This SQL Server VMware environment is now our third stand-alone VMware environment. Each environment consists of three ESXi hosts connected to one or two SANs.
So far, in the second environment – called the “application VMware” – we had not thought much about the VMs and their possible performance requirements.
But that changed with the introduction of this SQL Server on VMware environment, as we wanted to turn high-performance production servers from physical into virtual machines without sacrificing speed.

Basically, as described at the beginning of the article, it was of course about replacing existing hardware with new hardware in the form of VMware hosts. Behind this, in the implementation, was of course the requirement that nothing must become slower.
That’s why I studied the requirements of SQL Server in such an environment and read various guides and best practices. These are available from Microsoft, VMware, and several other sources (I have linked some of them below). Particularly helpful was a series of articles by David Klee on sqlservercentral.com called “Stairway to SQL Server Virtualization” (1).
The majority of the following settings/configurations is taken from that series because there, unlike in the other sources, the most important points are highlighted directly and understandably and shown by example.

1. Storage / Disk Partitioning
Basically, the same applies here as for physical hardware: distribute access across as many disks as possible – or here, SAN paths (LUNs). Since these often point to the same target on the SAN, distinguished only by different RAID levels (6 or 10), the only noticeable difference is the queue of each individual path (LUN).
This also means that the correct RAID levels have to be passed through to the VM via the different LUNs. Accordingly, disk configuration and allocation is as complex as in “physical life”.
It looks like this, for example – closely following the recommendations of David Klee (1):

Disk Configuration for the SQL Server VM

As mentioned in the article, it is also particularly important to use the correct virtual disk adapter, i.e. controller (see SCSI ID): not the default one, but the paravirtualized adapter (Paravirtual SCSI driver – PVSCSI). For partitions C: (OS) and Y: (pagefile) in the table above, the default LSI SAS controller can be kept.
The others should be set up on the mentioned PVSCSI. However, the corresponding controller – and thus the drives – are only available in the VM once the VMware Tools are installed.
Unfortunately, assigning the hard disks to the appropriate controller currently only works in the vSphere Web Client.

Disk Configuration in the vSphere Web Client

The disk type used is fixed-size disks (thick provisioned, lazy-zeroed or eager-zeroed), comparable to static VHDs. Although thin provisioning may be more economical in disk consumption at first, it consumes additional resources whenever the VMDK files are grown in the background because the space is needed. This happens – similar to the autogrowth of a database – usually at exactly the wrong time and can then cause problems in the VM.

Disk Drive characteristics

2. CPU
The CPU settings are also a bit more complicated.
First of all, note how many physical cores the ESX host running the VMs has. This should be considered the maximum that should be allocated to the VMs – overbooking CPU resources, i.e. assigning more CPUs than the host has available, does not make sense for SQL Server.
In addition, in connection with the size of the memory (point 3), one could also consider the topic of NUMA (Non-Uniform Memory Access). This means that accessing the memory directly attached to a CPU is faster than accessing “remote” memory. To what extent this affects the performance of the VMs I can only estimate so far, as I have not received the information from Dell that would tell me the correct configuration of our servers.
The basic CPU settings are simply the distribution between virtual sockets (processors) and cores per socket (cores per processor).
In the screenshot you can see that 12 cores are assigned to this VM, which then have to be licensed under current license conditions for SQL Server (no guarantee!).

CPU configuration

3. RAM
The allocation of memory for a VM mainly depends on the size of the databases, which for performance reasons should fit as completely as possible in main memory. On top of that comes a share for the OS and SQL Server itself. The VM in the screenshot above currently has 192 GB allocated.
As already indicated for the CPU, some technical details are still missing for the final configuration.
4. Network
For the network, you should use VMXNET3 adapters, which also only appear in the VM once the VMware Tools are installed. Basically, there is still room for teaming two network adapters in the VM – not necessarily for redundancy, but for a speed advantage from two additional buffers, though probably a minimal one. That still needs to be tested. This point was added later from another document, Idera’s “Moving SQL Server to a Virtual Platform” (5). For this reason, it is also not included in the original configuration.

The entire configuration refers to the settings of one VM. Of course, the complete installation incl. SQL Server was finally turned into a template in order to quickly create additional SQL Servers.

More settings:
• Set the power plan to High Performance – in the VM as well as on the ESXi hosts.
For the VM, in addition to the way through the energy settings, this can also be checked with the PowerShell dbatools command Test-DbaPowerPlan (https://dbatools.io/functions/test-dbapowerplan)
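A quick sketch of that check (the computer name is a placeholder; Set-DbaPowerPlan is the companion command in dbatools for correcting the plan):

```powershell
Import-Module dbatools

# Check whether the server follows the recommended "High Performance" plan
Test-DbaPowerPlan -ComputerName NEWVM01

# If IsBestPractice comes back as false, the plan can be corrected:
Set-DbaPowerPlan -ComputerName NEWVM01
```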

More about the dbatools in the 4th article of this series, “Migration”:
https://blog.volkerbachmann.de/2017/10/19/migration-of-sql-server-with-powershell-dbatools/

Sources:
1. Stairway to SQL Server Virtualization by David Klee on sqlservercentral.com
http://www.sqlservercentral.com/stairway/112551/
2. Understanding NUMA – https://technet.microsoft.com/en-us/library/ms178144(v=sql.105).aspx
3. VMware Best Practices SQL Server – http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-best-practices-guide.pdf
4. VMware Performance Best Practices – https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vmware-perfbest-practices-vsphere6-0-white-paper.pdf
5. Moving SQL Server to a Virtual Platform from Idera (registration required) – https://www.idera.com/resourcecentral/whitepapers/movingsqlservertoavirtualplatform
6. Brent Ozar’s page on Virtualization Best Practices: https://www.brentozar.com/sql/virtualization-best-practices/ (the page does not seem to exist anymore)

Topics for the remaining parts of the project:
• Migration of a SQL Server with dbatools, W2K16, SQL 2016 SP1 – done in this article:
https://blog.volkerbachmann.de/2017/10/19/migration-of-sql-server-with-powershell-dbatools/
• Performance comparison physical <-> virtual – to be done!

Any comments are welcome!

Thanks for reading,
Volker

Migration of SQL Server with PowerShell dbatools #PSBlogWeek

This article is about server management with PowerShell and is part of the #PSBlogWeek series (http://psblogweek.com), created by Adam Bertram.

Index:

  1. Introduction to dbatools
  2. Migration Prerequisites
  3. Best Practices
  4. Migration
  5. References

It is also part of my blog series about migrating our physical SQL Servers to a VMware environment. For now, all of these articles are in German only – sorry. The first three articles describe the basic server configuration, installation, and VM guest configuration of the VMware environment. This article describes the migration itself.
I’ll write a recap of the whole series in English later on. 🙂

  1. Introduction to dbatools

I got in contact with PowerShell some years ago, but I wasn’t satisfied with what needed to be done to maintain SQL Server.

However, Microsoft has made a lot of improvements since then, and with contributions from several PowerShell Experts and MVPs – such as Chrissy LeMaire, Claudio Silva, Rob Sewell, Constantine Kokkinos and many more, there is now a module that helps to maintain SQL Server 2005+. It’s called dbatools, and you can find it here https://dbatools.io. The project is hosted on GitHub and the module is available totally free of charge!

The dbatools community has grown to over 50 contributors with more than 300 SQL Server best practice, administration and migration commands. An overview of the commands can be found here: https://dbatools.io/functions/.

2. Migration Prerequisites

Now, let’s turn our attention to the prerequisites for the migration of a physical SQL Server 2008 to a VMware-based SQL Server 2016 on Windows Server 2016. The positive thing here was that there was no need to reinstall everything on the same physical hardware over the weekend. Instead, we bought a totally new VMware environment with three Dell servers, two network switches, and new storage. There was enough time to test the new SQL Server and the SAN, and to build a good configuration for the virtual machines. Most of the VM configuration is based on the blog series “Stairway to Server Virtualization” by David Klee, which can be found on SQL Server Central.

For migration purposes, we installed an additional Windows Server 2016 with PowerShell 5 and SQL Server 2016 as an admin workstation. On that server, we installed dbatools by using the simple Install-Module command:
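The installation itself is a single line from the PowerShell Gallery (run in an elevated PowerShell session):

```powershell
# Install dbatools from the PowerShell Gallery
Install-Module -Name dbatools
```

If you cannot install for all users, adding -Scope CurrentUser avoids the need for an elevated session.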

During installation, you may get a confirmation dialog prompting you to accept installation of the NuGet Package Manager. You should accept; otherwise, you’ll need another installation option. These options are described on the dbatools website: https://dbatools.io/download.

The dbatools module is in permanent development – they are currently nearing the first major release, 1.0 – so you should check for the latest version and update often. Updating is as easy as the installation:
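Assuming the module came from the PowerShell Gallery, updating is a one-liner as well; the second command simply lists every version that is now installed side by side:

```powershell
# Pull the latest dbatools release from the PowerShell Gallery
Update-Module -Name dbatools

# List all installed versions of the module
Get-Module -Name dbatools -ListAvailable
```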

dbatools available versions

On the screenshot we see five versions of the tools installed, so we have to activate the latest version with the command Import-Module.
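A minimal sketch of activating the newest copy – Import-Module loads the highest installed version by default:

```powershell
# Load dbatools; the highest installed version wins
Import-Module -Name dbatools

# Verify which version is active in this session
Get-Module -Name dbatools | Select-Object Name, Version
```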

With Get-Command -Module dbatools you get a quick overview of all the dbatools commands.

After installation of the base SQL Server VM we need to check some basic configuration options first. dbatools can help us with this as well. 🙂

All commands are created by experts with references to the corresponding articles where the code comes from.

3. Best Practices

  • Max Memory
    • Test-DbaMaxMemory
    • This tests the actual max memory setting against the recommended setting.
  •  TempDB
    • Test-DbaTempDbConfiguration
    • With SQL Server 2016, you get the option to configure tempdb during installation, but not with older versions. With this command, you can check the configuration and adjust it afterwards.
    • Evaluates tempdb against a set of rules to match best practices. The rules are:
      TF 1118 Enabled: Is Trace Flag 1118 enabled? (See KB328551)
      File Count: Does the count of data files in tempdb match the number of logical cores, up to 8?
      File Growth: Are any files set to have percentage growth? Best practice is that all files have an explicit growth value.
      File Location: Is tempdb located on the C:\ drive? Best practice says to locate it elsewhere.
      File MaxSize Set (optional): Do any files have a max size value? Max size could cause tempdb problems if it isn’t allowed to grow.
    • The right configuration can be set by using the corresponding command Set-DbaTempDbConfiguration.
      A service restart is necessary after reconfiguration.
  • Disk
    • Test-DbaDiskAlignment
      • This command verifies that your non-dynamic disks are aligned according to physical requirements.
      • Test-DbaDiskAlignment -ComputerName sqlserver01 | Format-Table
    • Test-DbaDiskAllocation
      • Checks all disks on a computer to see if they are formatted to 64k block size, the best practice for SQL Server disks.
      • Test-DbaDiskAllocation -ComputerName sqlserver01 | Format-Table
  • PowerPlan
    • Test-DbaPowerPlan
      • The Power Plan should be set to High Performance on every SQL Server.
      • Test-DbaPowerPlan -ComputerName sqlserver01

 

  • SPN
    • We use DNS CNAMEs for referring to our SQL Server (See the article “Using Friendly Names for SQL Servers via DNS” below). We need to adjust the SPN settings manually. That is easy with these commands:
      Get-DbaSpn and Set-DbaSpn
  • SQL Server Name
    • We created a single VM template from which all SQL Servers are created, with CPU, memory, and disk layout as described in the Stairway I mentioned above (1).
    • After creating a new VM from the template, the server name changes, but the internal SQL Server name does not. Help comes again from the dbatools command Repair-DbaServerName.
      Works fine for me!
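To tie the checks above together, a quick health check of a freshly cloned VM could look like this. The server name sqlserver01 is a placeholder, and the parameter names follow current dbatools releases:

```powershell
$server = 'sqlserver01'

# Compare the current max memory setting with the recommended value
Test-DbaMaxMemory -SqlInstance $server

# Check tempdb against the best-practice rules listed above
Test-DbaTempDbConfiguration -SqlInstance $server

# Disk alignment and 64k block size
Test-DbaDiskAlignment -ComputerName $server | Format-Table
Test-DbaDiskAllocation -ComputerName $server | Format-Table

# The power plan should be High Performance
Test-DbaPowerPlan -ComputerName $server
```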

4. Migration

  • Now for the best part – the migration itself. You normally only need a single command to migrate everything from one SQL Server to another. As described in the Help documentation, this is a “one-button click”.
    Start-DbaMigration -Source sql2014 -Destination sql2016 -BackupRestore -NetworkShare \\nas\sql\migration
  • This migrates the following parts as listed below. Every part can be skipped with a -No*** parameter as described in the Help documentation – for example, use -NoLogins if you don’t want to transfer the logins.
    • SQL Server configuration
    • Custom errors (user-defined messages)
    • SQL credentials
    • Database mail
    • User objects in system databases
    • Central Management Server
    • Backup devices
    • Linked server
    • System triggers
    • Databases
    • Logins
    • Data collector collection sets
    • Audits
    • Server audit specifications
    • Endpoints
    • Policy management
    • Resource Governor
    • Extended Events
    • JobServer = SQL Server Agent
  • If any error comes up, run the individual functions that Start-DbaMigration calls internally, step by step.
  • Keep in mind that the server configuration is also part of the migration, so min and max memory and all other parameters in sp_configure are transferred. If you want to keep these settings as set by the best practices commands, you should skip the configuration during transfer. Use -NoSpConfigure!
  • So what is missing at the moment?
    • Most of the special parts of the additional services:
      • SSIS
      • SSAS
      • SSRS
  • You can test the whole migration with the -WhatIf parameter, which shows what’s working and what isn’t. Sometimes the connection to the target computer isn’t working because PowerShell remoting is not enabled (see above).
    There is a command to test the connection to the server, and you can find that here:
    https://dbatools.io/functions/test-dbacmconnection
    There is no need for updating the new server to the latest version of PowerShell, Version 3.0 is enough.
  • The whole command looks like this for me:
    • Start-DbaMigration -Verbose -Source desbsql1 -Destination desbaw2 -BackupRestore -NetworkShare \\DESBAW2\Transfer -DisableJobsOnDestination -Force
  • The parameter -DisableJobsOnDestination is extremely helpful when you go to the next step and test the migration itself. When you do this more than once, you also need the parameter -Force, which overwrites the target objects (logins, databases, and so on) if they exist from a previous test.
  • The parameter -Verbose is useful when an error comes up and you need to dig deeper into the problem.
  • Before we wrap up, here’s a link to a YouTube video that shows how fast the migration works. Of course, it’s all going to depend on the size of your databases:
    https://youtu.be/PciYdDEBiDM
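Putting the options together, one way to rehearse the migration before the real run is a -WhatIf pass first. The instance names and the share are the ones from the article; treat this as a sketch, not a finished runbook:

```powershell
# Dry run: show what would be migrated without changing anything
Start-DbaMigration -Source desbsql1 -Destination desbaw2 `
    -BackupRestore -NetworkShare '\\DESBAW2\Transfer' -WhatIf

# Real run: keep the best-practice sp_configure settings on the target,
# disable the transferred Agent jobs until cutover, and overwrite
# leftovers from earlier test runs
Start-DbaMigration -Source desbsql1 -Destination desbaw2 `
    -BackupRestore -NetworkShare '\\DESBAW2\Transfer' `
    -NoSpConfigure -DisableJobsOnDestination -Force
```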

5. References:

  1. Stairway to SQL Server Virtualization by David Klee
  2. Using Friendly Names for SQL Servers via DNS

Thanks for reading,
Volker Bachmann