I ran across an interesting issue a couple of weeks ago when working with a client. The client has several subsidiaries, each with their own vNet, and had a Site to Site VPN between the Azure vNets. All traffic was successfully crossing the Azure Site to Site VPN as expected. The sticking point was a software licensing server running in one of the subsidiaries' Azure infrastructure: the licensing software simply wasn't working.
We fired up Wireshark on the Azure VM which was running the software as well as on the Azure VM which was running the licensing software. In Wireshark on the VM running the software, we could see the software trying to talk to the licensing software. On the licensing server, we could see the connection request come in, and we could see the response from the licensing software going back to the client. But when we looked on the VM running the software, we couldn't see the packet coming back from the licensing server. So the network traffic was simply getting blocked somewhere. We didn't have any network security groups set up, and we didn't have any software firewalls in place, so nothing should have been blocking traffic.
We looked at the response that was coming from the licensing server, and it had the DoNotFragment bit set on the response network packet. Now the really weird thing is that the packet was only 1430ish bytes in size, so it would have fit within a 1500-byte packet and there should have been zero chance of the packet being fragmented. Presumably the IPsec encapsulation on the Site to Site VPN pushes the effective MTU below that packet size, so the gateway would need to fragment the packet, and with DoNotFragment set it simply drops it instead. The bit was being set within the vendor's software, so we didn't have any way to remove that flag.
We were able to fix it by changing from a Site to Site VPN to a peered network connection between the two vNets. Changing the network connection to a peering allowed the software licensing process to work as expected and solved the problem.
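For reference, here's a rough sketch of setting up that kind of peering with Azure PowerShell. The vNet and resource group names are placeholders for your own, and note that the peering has to be created from both sides:

# Assumes an authenticated session (Connect-AzAccount) and the Az.Network module.
# The vNet and resource group names below are placeholders.
$vnetA = Get-AzVirtualNetwork -Name "subsidiary-a-vnet" -ResourceGroupName "subsidiary-a-rg"
$vnetB = Get-AzVirtualNetwork -Name "subsidiary-b-vnet" -ResourceGroupName "subsidiary-b-rg"

# Peering is created one direction at a time, so add it from each side.
Add-AzVirtualNetworkPeering -Name "a-to-b" -VirtualNetwork $vnetA -RemoteVirtualNetworkId $vnetB.Id
Add-AzVirtualNetworkPeering -Name "b-to-a" -VirtualNetwork $vnetB -RemoteVirtualNetworkId $vnetA.Id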
So if you have software which requires the DoNotFragment bit on its connection, then an Azure Site to Site VPN isn't going to work for you. If you are doing everything in Azure, vNet peering can work where a Site to Site VPN doesn't.
If you've been working with WordPress for a while now, you know that it's a pretty solid platform for blogging and posting content.
However, the WordPress database was clearly designed by developers without a DBA involved. And this isn't just WordPress; the same goes for some of the plugins as well.
I've got Query Monitor installed on our WordPress installation, and it pops up with slow queries every once in a while. So I figured I'd look at the queries and the indexes on the database and see what I could do about it.
Needless to say, there were a few indexes that needed to be added.
I'm assuming that you are using the prefix wp_ on all your tables, which is the default. If you are using a different prefix, you'll need to adjust these index creation scripts.
The first one is against the wp_options table.
create index dcac_option_name_autoload on wp_options (option_name, autoload);
The next one to create is against wp_term_taxonomy.
create index dcac_taxonomy on wp_term_taxonomy (taxonomy, term_taxonomy_id, term_id, parent, count, description(400));
The third index to be created against one of the WordPress tables is against the wp_terms table.
create index dcac_name on wp_terms (name, term_id, slug, term_group, term_order);
The fourth and fifth indexes that I've found you need to create are actually against one of the Yoast plugin tables, but since most people have the Yoast plugin installed, you'll want these indexes as well.
create index dcac_id_permalink_update_at on wp_yoast_indexable (id, permalink(10), updated_at);
create index dcac_object_type on wp_yoast_indexable (object_type, object_sub_type);
These indexes should help your WordPress site work more efficiently, as it will be easier for the MySQL database behind your WordPress installation to find the data it needs in order to run your website.
None of these indexes are going to shave seconds off your page load times, but if they each save 100-200 milliseconds, that's close to a second total, and that's a decent amount of time for queries that happen on every page load.
As I run across more indexes that need to be created, I’ll post them as I can.
There are a variety of ways to run MySQL scripts against your database, so if you aren't sure how to run them against your WordPress database, check with your hosting provider.
Looking to move your WordPress website to Microsoft Azure? The team at DCAC can help you migrate to a Cloud Services solution.
One of the benefits of cloud computing is flexibility and scale: you don't need to procure hardware or licenses as you gain new customers. This flexibility, combined with platform as a service offerings like Azure SQL Database, gives independent software vendors (and other companies selling access to their software) a lot of options in what they can provide to their customers. However, there is a lot of work and thought that goes into it. We have had success building out these solutions with customers at DCAC, so in this post I'll cover at a high level some of the architectural tenets we have implemented.
Authentication and Costing
The cloud has the benefit of providing detailed billing information, so you know exactly what everything costs. The downside is that the billing data provided is very granular and detailed and can be challenging to break down. There are a couple of options here: you can create a new subscription for each of your customers, which means you will have a single bill per customer, or you can place each of your customers into their own resource group and use tags to identify which customer is associated with that resource group. The tags appear in your Azure bill, which allows you to break down your bill by customer. While the subscription model is cleaner in terms of billing, it adds additional complexity to the deployment model and ultimately doesn't scale.
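If you go the tag route, the tagging can be baked into the resource group creation itself. Here's a rough sketch with Azure PowerShell; the resource group name, region, and tag values are just placeholders:

# Assumes an authenticated session (Connect-AzAccount) and the Az.Resources module.
# One resource group per customer, tagged so the customer shows up in cost reporting.
New-AzResourceGroup -Name "rg-customer-fabrikam" -Location "eastus2" -Tag @{ Customer = "Fabrikam"; Environment = "Production" }

You can apply the same tag values when you create the individual resources in the group so the cost data stays consistently tagged.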
The other thing you need to think about is authenticating users and security. Fortunately, Microsoft has built a solution for this with Azure Active Directory; however, you still need to think through how you structure it. Let's assume your company is called Contoso, and your AAD domain is contoso.com. Assuming you are using AAD for your own business's users, you don't want to include your customers in that same AAD. The best approach is to create a new Azure Active Directory tenant for your customer facing resources, in this case called cust.contoso.com. You would then add all of the required accounts from contoso.com to cust.contoso.com in order to manage the customer tenant. You may also need to create a few accounts directly in the customer tenant, as there are a couple of Azure operations that require an admin from the home tenant.
Deployment of Resources
One of the things you need to think about is what happens when you onboard a new customer. This can mean creating a new resource group, a logical SQL Server, and a database. In our case, it also means enabling a firewall rule, enabling performance data collection for the database, and a number of other configuration items. There are a few ways you can do this: you can use an Azure Resource Manager (ARM) template, which contains all of your resource information, and that is a good approach that I would typically recommend. In my case, there were some things that I couldn't do in the ARM template, so I resorted to using PowerShell and Azure Automation to perform deployments. Currently our deployment is semi-manual, as someone manually enters the parameters into the Azure Automation runbook, but it could easily be converted to be driven by an Azure Logic App or an Azure Function.
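To give an idea of what the PowerShell route can look like, here's a stripped-down sketch of an onboarding script. The names, region, service objective, and IP address are placeholders, and a real runbook would pull the admin credential from Azure Automation or Key Vault rather than prompting for it:

# Assumes the Az.Resources and Az.Sql modules and an authenticated session.
param($CustomerName)

$rgName     = "rg-customer-$CustomerName"
$serverName = "sql-$($CustomerName.ToLower())"
$adminCred  = Get-Credential   # placeholder; use an Automation credential or Key Vault in practice

# Resource group tagged with the customer for billing breakdown.
New-AzResourceGroup -Name $rgName -Location "eastus2" -Tag @{ Customer = $CustomerName }

# Logical SQL Server and the customer database.
New-AzSqlServer -ResourceGroupName $rgName -ServerName $serverName -Location "eastus2" -SqlAdministratorCredentials $adminCred
New-AzSqlDatabase -ResourceGroupName $rgName -ServerName $serverName -DatabaseName "$CustomerName-db" -Edition "Standard" -RequestedServiceObjectiveName "S0"

# Firewall rule for the application tier (placeholder address).
New-AzSqlServerFirewallRule -ResourceGroupName $rgName -ServerName $serverName -FirewallRuleName "app-tier" -StartIpAddress "203.0.113.10" -EndIpAddress "203.0.113.10"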
Deployment of Data and Data Structures
When you are dealing with multiple databases across many customers, you desperately want to avoid the schema drift that can happen. This means having a single database project for all of your databases. If you have to add a one-off table for a customer, you should still include it in all of your databases. If you are pushing data into your tables (as opposed to the data being entered by the application or users), you should drive that process from a central table (more to come about this later).
Where this gets dicey is with indexes, as you may have some indexes that are needed for specific customer queries. In general, I'd say the overhead on write performance of having some additional indexes is worth the potential benefit on reads. How you manage this is going to depend on the number of customer databases you are managing; if you have ten databases, you might be able to manage each database's indexes individually. However, as you scale to a larger number of databases, you aren't going to be able to manage this by hand. Azure SQL Database can add and drop indexes as it sees fit, which can help with this, but isn't a complete solution.
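If you do want to let Azure SQL Database manage indexes for you, that automatic tuning can be switched on per database. Here's a minimal sketch using the Az.Sql module, with placeholder names (I'd still review the recommendations it acts on):

# Assumes the Az.Sql module; resource group, server, and database names are placeholders.
Set-AzSqlDatabaseAdvisorAutoExecuteStatus -ResourceGroupName "rg-customer-fabrikam" -ServerName "sql-fabrikam" -DatabaseName "fabrikam-db" -AdvisorName "CreateIndex" -AutoExecuteStatus Enabled
Set-AzSqlDatabaseAdvisorAutoExecuteStatus -ResourceGroupName "rg-customer-fabrikam" -ServerName "sql-fabrikam" -DatabaseName "fabrikam-db" -AdvisorName "DropIndex" -AutoExecuteStatus Enabled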
Hub Database and Performance Data Warehouse
Even if you aren't using a hub and spoke model for deploying your data, having a centralized repository for metadata about your client databases is well worth it. One common task is collecting performance data across your entire environment. While you can use Azure SQL Diagnostics to capture a whole lot of performance information in your environment, with one of our clients we've taken a more comprehensive approach, combining the performance data from Log Analytics, audit data that also goes to Log Analytics, and the Query Store data from each database. While Log Analytics contains data from the Query Store, there was some additional metadata that we wanted to capture that we could only get from the Query Store directly. We use Azure Data Factory pipelines (which were built by my co-worker Meagan Longoria (b|t)) to load that data into a SQL Database that serves as a data warehouse. I've even built some XQuery to do some parsing of execution plans, to identify which tables are most frequently queried. You may not need this level of performance granularity, but it is a conversation you should have very early in your design phase. You can also use a 3rd party vendor tool for this, but the costs may not scale if your environment grows to be very large. I'm going to do a webinar on this in a month or so; I still need to work out the details, but stay tuned.
You want to have the ability to quickly do something across your environment, so having some PowerShell that can loop through all of your databases is really powerful, something like the sketch below. That code allows you to make configuration changes across your environment, or, using dbatools or Invoke-SqlCmd, to run a query everywhere. You also probably need to get pretty comfortable with Azure PowerShell, as you don't want to have to change something in the Azure Portal across 30+ databases.
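Here's a rough sketch of that kind of loop, using Az.Sql to enumerate the databases and dbatools to run a query against each one. The credential and the query are placeholders:

# Assumes the Az.Sql and dbatools modules and an authenticated Azure session.
$cred = Get-Credential   # SQL login for the logical servers (placeholder)
foreach ($server in Get-AzSqlServer) {
    $databases = Get-AzSqlDatabase -ResourceGroupName $server.ResourceGroupName -ServerName $server.ServerName |
        Where-Object { $_.DatabaseName -ne "master" }
    foreach ($db in $databases) {
        # Run the same query (or configuration change) against every customer database.
        Invoke-DbaQuery -SqlInstance "$($server.ServerName).database.windows.net" -SqlCredential $cred -Database $db.DatabaseName -Query "SELECT @@SERVERNAME AS server_name, DB_NAME() AS database_name;"
    }
}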
Recently we upgraded the networking in our CoLo from our existing horrible, not-all-the-features-work-correctly, bought-off-eBay NetGear switches to a brand new (actually purchased new) Ubiquiti network stack. We went with Ubiquiti because they have a really good reputation, they have a fantastic price point, and the UI is really simple to use while giving us all of the features we were looking for.
Like any good IT deployment, we hit a snag when we were pushing out our network configuration. All of our servers have 10 Gig network cards in them, and our SAN also has 10 Gig network cards for our NFS shares (we are a VMware vSphere shop), so we have a storage network. We also wanted to put our VMs on the 10 Gig cards, as they were on 1 Gig ports before and we wanted them to have more bandwidth available to them.
The UniFi software on the Ubiquiti equipment has two different network setups: the base network, which we set up as our management network, and any other networks, which can be configured but require a VLAN. We had a few networks to set up: our Infrastructure network, which we gave a VLAN of 4; our Storage network, which we gave a VLAN of 5; and our lab, which we gave a VLAN of 100.
Our VMware servers all have a dedicated NIC which we use for our management ports, so we didn't need the Management network to be accessible from the NIC that the VMs were going to use. Within the UniFi software I created what is called a port profile, which can contain a variety of networks. This way a single switch port can be on multiple networks, which was exactly what I wanted: the 10 Gig ports and their NICs needed to be on the Storage, Infrastructure, and Lab networks, so I created a single port profile with all of those networks in it. As you can see from the screenshot below, when you do this you select a native network for the port profile.
After I got this set up, I was getting weird responses from the VMs and the VMware hosts that were trying to talk to the storage. I put VLAN IDs in VMware for all of these networks as well, but things still weren't talking correctly.
It turns out that whatever network you have configured as the native network doesn't get a VLAN ID within VMware. So in my case the Storage network within VMware does not get a VLAN ID, while the other networks do, even though the Storage network has a VLAN ID of 5 within the UniFi OS.
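Here's roughly what that ends up looking like if you set the port group VLANs with PowerCLI. The port group names are placeholders for however you've named yours, and this assumes standard vSwitches:

# Assumes VMware.PowerCLI and a connected vCenter session (Connect-VIServer).
# Storage rides the native (untagged) network on the switch, so it gets no VLAN tag in VMware.
Get-VMHost | Get-VirtualPortGroup -Name "Storage" | Set-VirtualPortGroup -VLanId 0
# The other networks keep the VLAN tags that match the UniFi configuration.
Get-VMHost | Get-VirtualPortGroup -Name "Infrastructure" | Set-VirtualPortGroup -VLanId 4
Get-VMHost | Get-VirtualPortGroup -Name "Lab" | Set-VirtualPortGroup -VLanId 100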
Once I did that, the storage for the VMs was able to talk perfectly and all the VM Subnets worked as expected.