
Monday, July 4, 2016

What will virtualization look like in 10 years?

Let's define "virtualization" first. Virtualization is abstraction from the hardware itself, from its microarchitecture. So when we talk about virtualization, it is not just about server hypervisors and VMware virtual machines. Software-defined storage, containers, even remote desktops and application streaming are all virtualization technologies.

As of today, mid-2016, server virtualization has almost stalled. There have been no breakthroughs for the last several years: leaders keep honing their hypervisors toward perfection, challengers follow, and the difference shrinks with each year. So vendors now fight over the ecosystem: management, orchestrators, monitoring tools and subsystem integration. We are surprised when someone wants to buy a physical server and install Windows on it instead of a hypervisor. Virtual machines are no longer an IT toy; they are an industry standard. Unfortunately, a sensible defense scheme (from backups to virtualization-aware firewalls) is not yet a standard feature.

Software Defined Everything, or we could say Virtualized Everything, has grown enormously. Most corporate-grade storage systems are almost indistinguishable from standard x86 servers except for the form factor. Vendors no longer use special CPUs or ASICs, putting powerful multi-core Xeons into the controllers instead. A storage system today is actually just a standard server with standard hardware, just with a lot of disks and some specialized software. All the storage logic, RAID, replication and journaling is done in software now. We blur the storage/server border even further with smart cache drivers and server-side flash caches. Where does the server end and the storage begin?

From the other side we see pure software storage systems, never sold as hardware, which carry no hardware storage heritage or architectural traits. Take any server, add disks and RAM as you please, install an application and voila! Short on space, performance or availability? Take another server, and another, and maybe a couple more. It gets even more interesting when we install the storage software in a virtual machine or even make it a hypervisor module. There are no separate servers and storage anymore; this is a unified compute/storage system, which we call hyperconverged infrastructure. Virtual machines run inside it, virtual desktops and servers. More than that, users cannot tell whether they are in a dedicated VM, a terminal server session or just a streamed application. But who cares, when you can connect from a McDonald's on the other side of the globe?

Today we talk about containers, but they are not a technological breakthrough. We have known about them for years, especially ISPs and hosting providers. What will happen in the near future is a merge of traditional full virtualization and containers into a single unified hypervisor. Docker and its rivals are not yet ready for production-level corporate workloads; there are still a lot of open questions around security and QoS, but I bet it is just a matter of a couple of years. Well, maybe more than a couple, but 10 is more than enough. Where was VMware 10 years ago, and where are we now in terms of server virtualization?
The network control plane is shifting more and more towards software, and access-level switching keeps blurring. Where is your access layer when you have 100 VMs switching inside a hypervisor, never reaching a physical port? The only part really left to specialized hardware is high-speed core switches and ultra-low-latency fabrics like InfiniBand. But even that is just the data plane; the control plane lives in the cloud.

Everything is moving towards the death of the general-purpose OS as we know it. We do not really need an OS as such; we only need it to run applications. And applications are shifting more and more from installable packages to portable containers. We will see hypervisor 2.0 as the new general-purpose OS, and a further blurring of the lines between desktop, laptop, tablet and smartphone. We still install applications, but we already store our data in the cloud. In 10 years a container with an application will move between desktop, smartphone and server infrastructure as easily as we move virtual machines now.

Some years ago we had to park the floppy drive heads when we were done; today's teenagers live with the cloud, and tomorrow's teenagers will have to work hard to grasp how applications and data were ever tied to hardware.

Sunday, November 15, 2015

Virtual DataCenter Design. Host Sizing. 1

Hyper-Threading

I often see two contradictory approaches to including Hyper-Threading in sizing: either ignore HT entirely and calculate CPU capacity based on "real" cores only, or double the CPU capacity because you see twice as many logical CPUs. Unfortunately, both are wrong. In general you can safely count HT as roughly +25% CPU capacity, with some caveats. The main one is that HT delivers most of its benefit on workloads with lots of small VMs. Let's assume we have a 2-way host with 12-core CPUs. That means 24 "real" cores, or 30 "effective" cores. But 24 vCPUs is still the widest VM you should place on that host.
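As a minimal sketch of this rule of thumb, here is the arithmetic in Python. The 1.25 uplift factor and the 2 x 12-core host are just the assumptions from the example above, not measured values.

```python
# Rough host CPU sizing with the "+25% for Hyper-Threading" rule of thumb.
# Numbers match the example in the text (2 sockets x 12 cores);
# the 1.25 uplift is an assumption, not a benchmark result.

SOCKETS = 2
CORES_PER_SOCKET = 12
HT_UPLIFT = 1.25          # count HT as +25% capacity, not +100%

physical_cores = SOCKETS * CORES_PER_SOCKET          # 24 "real" cores
effective_cores = int(physical_cores * HT_UPLIFT)    # 30 "effective" cores

def max_vm_width(physical_cores):
    """Widest sensible VM: never wider than the physical core count."""
    return physical_cores

print(f"Physical cores:   {physical_cores}")
print(f"Effective cores:  {effective_cores}")
print(f"Widest VM (vCPU): {max_vm_width(physical_cores)}")
```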

1 or 2 or 4

Should we choose 2-way or 4-way servers for virtualization? It is a pretty common question, so how do we solve it?

Target workload is the answer in 100% of cases, together with fault domain size. The fault domain here is a single host. Let's assume a target workload of 200 VMs that can be handled by 10 2-way hosts with 256 GB of RAM each. It is not rocket science to deduce that 5 4-way hosts with 512 GB of RAM each can handle the same workload. But! In case of an HA event we will have twice as many VMs and services affected, twice as many VMs to restart, and a higher load on the SAN. We lose 20% of the cluster resources instead of 10% with 2-way servers. And that's not all: for planned maintenance we have to evacuate twice as many VMs, which means roughly 2-2.5 times longer evacuation time.
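A quick sketch of the same fault-domain arithmetic, using only the figures quoted above (200 VMs, 10 versus 5 hosts):

```python
# Fault domain comparison: the same 200-VM workload on
# 10 x 2-way hosts (256 GB each) vs 5 x 4-way hosts (512 GB each).
# This only restates the arithmetic from the text.

def fault_domain_impact(total_vms, hosts):
    vms_per_host = total_vms / hosts
    cluster_share_lost = 1 / hosts      # resources gone when one host fails
    return vms_per_host, cluster_share_lost

for label, hosts in (("2-way x 10 hosts", 10), ("4-way x 5 hosts", 5)):
    vms, share = fault_domain_impact(200, hosts)
    print(f"{label}: {vms:.0f} VMs to restart after one host failure, "
          f"{share:.0%} of cluster resources lost")
```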

The latest improvements in CPU design bring an unprecedented number of cores per socket, making the 2-way vs 4-way quarrel even more complicated. Why the hell not 1-way servers with 16-18-core CPUs?

Conclusion. 4-way servers should be used in cases such as:
- The need for Xeon E7 CPUs instead of E5 (though we can still use 4-way servers with only 2 sockets populated).
- The need for ultra-wide VMs the size of a full 2-way server or more, with no possibility of application-level clustering to split the load.

CPU choice

How to choose a CPU within the family, that is the question. You can guess blindly, flip a coin or even consult a prophecy, but most such choices are made without real thinking. Let's look at some basic aspects.
- NUMA. All current x86 CPUs have a NUMA architecture, so each CPU has its own local memory instead of globally shared memory. Access to another CPU's memory goes through its owner and is slower. That is not really a problem in itself, just an issue to keep in mind. The real problem is that on a 2-way server with only one CPU installed, only half of the maximum memory is usable.
- CPU price, and price per core.
- Software licenses cost.

Let’s assume we chose the Intel R1304WT2GS as the base platform for virtualization. Now we have to decide on the number of CPUs to get the best performance/cost ratio. We are planning to install vSphere Standard ($1940 per socket), Windows 2012R2 Datacenter for the VMs ($6155 per 2 CPUs) and Veeam Backup Standard ($1116 per socket).
The 1-socket configuration (E5-2670 v3 12c + 192 GB) costs $7970 in hardware, and we need licenses for 1 socket, which is another $6133, roughly $14100 in total.
A 2-socket configuration with the very same performance characteristics (dual E5-2620 v3 6c + 192 GB) costs $6890 in hardware, but the software cost doubles and the result is $19160. So with the very same performance, the 1-way server is cheaper and even a little faster thanks to having just one NUMA node. Disclaimer: this is not a 100% correct calculation, just an example of how to calculate.
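As a rough sketch, the same comparison can be laid out in a few lines of Python; the prices and configurations are the ones quoted above and are illustrative only, not current list prices:

```python
# Reproduces the 1-socket vs 2-socket cost comparison from the text.
# All figures are the illustrative prices quoted above.

VSPHERE_STD_PER_SOCKET = 1940
WINDOWS_DC_PER_2_CPUS = 6155
VEEAM_STD_PER_SOCKET = 1116

def software_cost(sockets):
    """Per-socket licensing; Windows Datacenter is priced per 2 CPUs."""
    return (sockets * VSPHERE_STD_PER_SOCKET
            + sockets * WINDOWS_DC_PER_2_CPUS / 2
            + sockets * VEEAM_STD_PER_SOCKET)

configs = {
    "1 x E5-2670 v3 (12c) + 192 GB": {"sockets": 1, "hardware": 7970},
    "2 x E5-2620 v3 (6c) + 192 GB":  {"sockets": 2, "hardware": 6890},
}

for name, cfg in configs.items():
    total = cfg["hardware"] + software_cost(cfg["sockets"])
    print(f"{name}: ${total:,.0f} total")   # ~ $14,100 vs ~ $19,160
```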

So why do we need 2-way or even 4-way systems at all?
- 2-way servers support twice as much maximum memory.
- A single CPU is not enough for really heavily loaded applications.
- As a general rule, the more cores a CPU has, the lower its frequency. For applications with core-based licensing, such as Oracle DB, that can be an issue. For example, 2 x E5-2637 v3 (4c, 3.50 GHz) vs 1 x E5-2667 v3 (8c, 3.20 GHz) is about a 10% difference in peak performance (28 GHz vs 25.6 GHz) while the number of core-based licenses stays the same (see the sketch below).
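The sketch below simply restates that licensing arithmetic: both options expose 8 cores in total, so the core-based license count stays the same while the aggregate frequency differs.

```python
# Aggregate peak GHz for the two core-license-equivalent options above.

options = {
    "2 x E5-2637 v3 (4c @ 3.50 GHz)": {"sockets": 2, "cores": 4, "ghz": 3.50},
    "1 x E5-2667 v3 (8c @ 3.20 GHz)": {"sockets": 1, "cores": 8, "ghz": 3.20},
}

for name, o in options.items():
    total_cores = o["sockets"] * o["cores"]     # 8 cores in both cases
    aggregate = total_cores * o["ghz"]          # 28.0 GHz vs 25.6 GHz
    print(f"{name}: {total_cores} cores, {aggregate:.1f} GHz aggregate")
```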

General recommendations:
- CPUs with many cores are well suited for loads with a lot of lightly to medium loaded VMs; the more cores, the better.
- The wider the VMs you run, the better they perform on CPUs with more cores.
- If you only have a few VMs running applications known for loving MHz rather than parallel processing, stick to higher-frequency CPUs with fewer cores.
- Always start your design with single-CPU servers. Add a second CPU only after you have reached a minimum of 5 hosts (except with vSphere Essentials), or when you really need more memory than a single CPU can support. Or leave the second socket for a future upgrade.