I don’t recall the title, but I read a book once that outlined how the Industrial Revolution changed the world and made the United States boom economically.
I imagine someday in the future, someone like me will read a similar book about the Cloud and how it revolutionized the world.
Industrial Revolution and Electrification
Wikipedia outlines this time as:
“The Industrial Revolution was the transition to new manufacturing processes in the period from about 1760 to sometime between 1820 and 1840. This transition included going from hand production methods to machines, new chemical manufacturing and iron production processes, improved efficiency of water power, the increasing use of steam power, the development of machine tools and the rise of the factory system.”
In another book I read, the author outlined the advantages that came from Electrification. Factories used to be built along a waterway, not only so water wheels could power belt-driven tools and automation, but also because the water provided a much-needed distribution channel, delivering goods to and from the factory by boat.
Electrification freed new factories from these constraints, allowing them to be built anywhere and to draw effectively unlimited power from the electrical grid. This allowed factories to enjoy economies of scale and produce goods more efficiently.
The Information Age and the Cloud
Again, the formal definition from Wikipedia:
“The Information Age (also known as the Computer Age, Digital Age, or New Media Age) is a period in human history characterized by the shift from traditional industry that the Industrial Revolution brought through industrialization, to an economy based on information computerization. The onset of the Information Age is associated with the Digital Revolution, just as the Industrial Revolution marked the onset of the Industrial Age.”
We are all living in the Information Age; it’s really an amazing time to be alive! We’ve all watched as technology has transformed our lives. Miniaturization has enabled the creation of mobile phones, computers, and IoT devices, making technology ubiquitous. We communicate differently, we share information differently, and social media has become a driving force of social evolution.
I’m a strong advocate of keeping my skills up to date in this industry! Over the last several months I’ve been studying to earn my Azure Solution Architect certification from Microsoft. It’s a very hard certification to earn; there is a lot you have to know about Microsoft’s Azure cloud. So far I’ve passed 2 of the 3 exams:
I still have one more exam to take to earn the Azure Solution Architect certification. My point here is that I’ve taken a lot of time to learn more about the cloud.
I’ve been using Windows Azure for several years now. At Solid Cloud (my company), we design, develop, service, and support custom applications for our customers in the cloud.
Since we’ve been at this a while, I’ve started to see the Cloud itself change in functionality and the way we design applications is changing. Let me outline how cloud computing is changing.
Software companies large and small have been utilizing virtualization, whether from VMware or Microsoft’s Hyper-V, for the last decade. Virtualization allows developers to get more capacity out of the physical hardware they have purchased. You can treat one physical machine as multiple machines, and even snapshot and manage the state of those machines in a very flexible way.
Most developers today use the cloud and have for a few years now. Most start by setting up virtual machines in AWS or Azure so that they no longer have to worry about the physical hardware. We are already used to virtualization, so why not move the machine to the cloud? There isn’t really a difference; we just don’t have to worry about the hardware anymore.
Virtualization allows you to create machines within a machine.
So, you’ve gained the ability to scale your hardware into multiple isolated environments, and our focus shifts from hardware to virtualized hardware. Most modern applications don’t use many resources on a user’s computer anymore. Many applications are no longer installed, but simply consumed via a web browser or mobile device.
IaaS vs PaaS vs SaaS
The scenario I outlined above, virtualization, when applied to the cloud becomes IaaS (Infrastructure as a Service). This means Microsoft’s cloud provides us the ability to create all types of virtual machines: web servers, media servers, database servers, and more. Now we don’t have to buy any hardware and can pay for these services on demand.
PaaS (Platform as a Service) differs from IaaS in that we focus on platform-based offerings: storage, load balancing, security, web infrastructure, and so on, all capable of running parts of our application. We don’t have to worry about the virtual machines, or the hardware infrastructure at all for that matter! We can create true “cloud applications” that are composed solely of PaaS services and don’t require any IaaS infrastructure at all!
SaaS (Software as a Service) provides complete cloud-hosted software applications that third-party ISVs can offer to businesses. The SaaS model allows customers to utilize modern software platforms without needing to worry about the infrastructure required to provide those services.
Since I’m trying to make a point about how the nature of software development is changing, let’s stay focused on the differences between IaaS and PaaS. To me, this is where the cloud really starts to offer advantages for application development.
I’ve been a software engineer since the mid-90s. When I started programming, the internet wasn’t widely available yet. We used dial-up BBSes and shared games and other software on floppy disks.
The applications I’ve developed over the past 20 years all followed a similar design, commonly described as a 3-tier architecture.
We designed applications this way because of how they would scale. If we needed to scale this design, we could do so by scaling out (more machines) or scaling up (more powerful hardware) whichever tier of the application was struggling. We could even add caching mechanisms to increase application throughput.
This design is still modeled around the hardware on which we are running. Each of these “tiers” typically runs on a virtual machine. To scale out the web tier, we add more machines. To scale the business or data tier, we scale up first, since scaling out at that level requires more effort.
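To make the tier boundaries concrete, here is a minimal sketch of the classic 3-tier separation. All class and method names are illustrative, not from any specific framework; in production the data tier would wrap a real database and the web tier a real HTTP server.

```python
class DataTier:
    """Owns persistence; in a real system this wraps a database."""
    def __init__(self):
        self._rows = {}

    def save(self, key, value):
        self._rows[key] = value

    def load(self, key):
        return self._rows.get(key)


class BusinessTier:
    """Enforces the rules; knows nothing about HTTP or storage details."""
    def __init__(self, data):
        self.data = data

    def place_order(self, order_id, quantity):
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        self.data.save(order_id, {"quantity": quantity})
        return order_id


class WebTier:
    """Translates requests into business calls; scale out by adding copies."""
    def __init__(self, business):
        self.business = business

    def handle_request(self, params):
        order_id = self.business.place_order(params["id"], params["qty"])
        return {"status": 200, "order": order_id}


web = WebTier(BusinessTier(DataTier()))
print(web.handle_request({"id": "A-1", "qty": 3}))  # {'status': 200, 'order': 'A-1'}
```

Because each layer only talks to the one below it, you can run many copies of `WebTier` behind a load balancer while the data tier stays a single, shared resource, which is exactly where the scaling trouble described below begins.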
The application design outlined above can definitely scale and service the needs of many customers. I’ve built many applications based on this architectural pattern and they work great! However, at some point their ability to scale hits limits.
These apps are all built with some “expectation of traffic usage”:
- Can this app handle 10 users at once? Sure.
- Can this app handle 100 users at once? Sure.
- Can this app handle 1,000 users at once? Yes, but we need a load balancer now and have to scale out the web tier.
- Can this app handle 10,000 users at once? Probably not. The main bottleneck becomes the business and data tiers, where data concurrency must be maintained. Relational databases allow for very efficient storage of data, but not the most efficient querying of data at scale.
To maintain accuracy, relational databases lock data while it is read and updated, keeping transactions consistent; but in doing so, the locks themselves become the bottleneck.
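A toy illustration of why locks become the bottleneck: the lock keeps the shared value correct, but it also forces every writer to wait its turn, no matter how many threads (or app servers) we add. The variable names here are invented for the example; a database row lock behaves analogously.

```python
import threading

balance = 0
row_lock = threading.Lock()

def deposit(amount, times):
    """Each call is like a transaction updating the same 'row'."""
    global balance
    for _ in range(times):
        with row_lock:                   # pessimistic lock, like a DB row lock
            current = balance            # read while holding the lock
            balance = current + amount   # write, then release

threads = [threading.Thread(target=deposit, args=(1, 10_000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # 40000 -- correct, but all 40,000 updates ran one at a time
```

Remove the lock and the read-modify-write races, losing updates; keep it and throughput is capped by the serialized critical section. That trade-off is the one NoSQL systems attack by restructuring the data instead.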
To avoid this problem, new types of storage systems have been created. A NoSQL database can work much faster. Redis and Azure DocumentDB are examples of these new storage mechanisms. They change the way data is stored and accessed so that lock contention is drastically reduced and concurrency can still be maintained.
They accomplish this by “de-normalizing” the data, storing it as a “document” rather than a series of related tables. This helps with speed, but sacrifices normalization, leading to more duplicated data. That is the necessary trade-off: it eliminates the database locks we hit trying to scale our 3-tier application.
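The normalized-versus-document trade-off can be shown with plain dictionaries. This is a hypothetical schema invented for illustration; real document stores add indexing, partitioning, and replication on top of the same idea.

```python
# Normalized (relational) layout: each fact stored once, joined at query time.
customers = {1: {"name": "Ada"}}
orders = [{"id": 101, "customer_id": 1, "item": "widget"}]

def order_with_customer(order_id):
    """Simulates the join a relational query would perform."""
    order = next(o for o in orders if o["id"] == order_id)
    return {**order, "customer": customers[order["customer_id"]]}

# De-normalized (document) layout: the customer data is copied into each
# order document, so a single key lookup returns everything -- no joins,
# no cross-table locking, at the cost of duplicated data.
order_documents = {
    101: {"id": 101, "item": "widget", "customer": {"name": "Ada"}},
}

print(order_with_customer(101)["customer"] == order_documents[101]["customer"])
```

The duplication is the price: if Ada changes her name, the relational layout updates one row, while the document layout must touch every order document that embedded her record.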
Now imagine building a system that needs to support 10,000 IoT devices, all connected to your application and uploading telemetry every second. The model we’ve used to build applications for the last few decades quickly stops scaling effectively.
Not only is there a scalability limit to the 3-tier design, it also makes updates difficult: every change requires redeploying an entire tier. This leads to less frequent, larger releases, each of which needs testing across large areas of the application to ensure nothing broke.
Microservices is a newer term describing a cloud application development model in which components can still be logically grouped into a “tier” by responsibility, but are separated into individual services that can be updated and scaled independently.
These microservices can then be scaled based on the demand a particular service is under at any given moment, and each component can be versioned and upgraded on its own.
These types of applications are now possible using container and cluster solutions like Docker, Docker Swarm, Kubernetes, Azure Service Fabric, or Mesosphere DC/OS. Each of these provides a cluster of VM nodes on which the orchestration service can host your application’s microservices.
On Azure, the Azure Container Service can provision these orchestrators for you, providing a microservice operating environment and making it easier to manage a cluster of services.
Each microservice agrees to report information about its health to the container orchestration service. This way the orchestrator can effectively manage the scaling and updating of each microservice.
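The health-reporting contract can be sketched in a few lines. This is a deliberately simplified model with invented class names; real orchestrators (Kubernetes liveness probes, Service Fabric health reports) apply the same idea over HTTP endpoints or an SDK, and replace or restart instances that stop reporting healthy.

```python
class Microservice:
    """One running instance of a service that exposes a health probe."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def health(self):
        return {"service": self.name,
                "status": "ok" if self.healthy else "failing"}


class Orchestrator:
    """Polls each instance's probe and replaces any that report unhealthy."""
    def __init__(self, services):
        self.services = services

    def reconcile(self):
        for i, svc in enumerate(self.services):
            if svc.health()["status"] != "ok":
                # In a real cluster this schedules a fresh container,
                # possibly on a different node.
                self.services[i] = Microservice(svc.name)


cluster = Orchestrator([Microservice("orders"), Microservice("billing")])
cluster.services[1].healthy = False   # simulate a crashed instance
cluster.reconcile()
print([s.health()["status"] for s in cluster.services])  # ['ok', 'ok']
```

Because the orchestrator only sees the probe, not the service internals, any service in any language can participate as long as it honors the health contract.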
COM+ and DCOM return?
Utilizing a microservices architecture is not a new concept. My COM+ book, copyrighted 2001 (not sure why I still have it), describes something very similar:
An advantage of COM+ was that it could be run in “component farms”. Instances of a component, if coded properly, could be pooled and reused by new calls to its initializing routine without unloading it from memory. Components could also be distributed (called from another machine). COM+ and Microsoft Visual Studio provided tools to make it easy to generate client-side proxies, so although DCOM was used to make the remote call, it was easy to do for developers. COM+ also introduced a subscriber/publisher event mechanism called COM+ Events, and provided a new way of leveraging MSMQ (inter-application asynchronous messaging) with components called Queued Components. COM+ events extend the COM+ programming model to support late-bound (see Late binding) events or method calls between the publisher or subscriber and the event system.
A typical COM+ component would receive an event from a message queue or be invoked through MTS, access the database, perform some type of transaction, and respond with its own message or event.
So what’s changed? How are microservices different from, or better than, COM+? In a nutshell: the context has changed to the cloud!
A New Context – Microservices in the PaaS layer
Modern microservices can take advantage of the service offerings of the PaaS layer. Storage, messaging, load balancing, and scaling are all benefits gained by giving microservices access to PaaS services.
In this new context, microservices can scale almost infinitely!
These systems can scale to establish bi-directional communication with billions of devices!
There is no such thing as infinite scalability; processes eventually run on physical hardware, which has finite limits. However, those limits no longer constrain application design!
Data, Data, everywhere!
Brian Krzanich, the CEO of Intel, recently discussed how Intel is changing focus to support this new ecosystem of devices.
“There are five core beliefs that I hold to be undeniably true for the future.
- The cloud is the most important trend shaping the future of the smart, connected world – and thus Intel’s future.
- The many “things” that make up the PC Client business and the Internet of Things are made much more valuable by their connection to the cloud.
- Memory and programmable solutions such as FPGAs will deliver entirely new classes of products for the data center and the Internet of Things.
- 5G will become the key technology for access to the cloud and as we move toward an always-connected world.
- Moore’s Law will continue to progress and Intel will continue to lead in delivering its true economic impact.
Our strategy is based on these premises, and the unique assets that only Intel brings to them. There is a clear virtuous cycle here – the cloud and data center, the Internet of Things, memory and FPGAs are all bound together by connectivity and enhanced by the economics of Moore’s Law.”
Intel CEO Brian Krzanich has predicted that by 2020, the Internet of Things will include 50 billion devices and each user of those gadgets will generate 1.5 gigabytes of data every day. But, he has said, the average autonomous car will create about 40 gigabytes of data each minute.
How can we possibly collect and process that much data? We can’t without changing the way applications are designed.
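The quoted figures are easy to sanity-check with back-of-envelope arithmetic (the numbers come from the Krzanich quotes above; the conversion 1 exabyte = 10^9 GB is standard):

```python
# 50 billion devices, each generating 1.5 GB per day
devices = 50e9
gb_per_device_per_day = 1.5
total_gb_per_day = devices * gb_per_device_per_day
print(f"{total_gb_per_day / 1e9:.0f} exabytes/day")   # 75 exabytes/day

# One autonomous car at 40 GB per minute, around the clock
car_gb_per_min = 40
car_gb_per_day = car_gb_per_min * 60 * 24
print(f"one car: {car_gb_per_day / 1000:.1f} TB/day")  # one car: 57.6 TB/day
```

Seventy-five exabytes per day from devices, and tens of terabytes per day from a single car: no 3-tier application funneling writes through one locked relational database is going to absorb that.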
THIS IS A GAME CHANGER!!!
The reason I talked earlier about the Industrial Revolution and outlined a timeline of technological progress is that I see a similarly huge shift in application design. Just as Electrification changed the way factories were built, enabling economies of scale through on-demand power, the Cloudification of microservices (if that’s a word) is changing the way applications are built.
Throughout my career in computing, I’ve seen transformation not only in the work I do day to day, but also in the underlying platforms on which we build applications.
The shift to Microservices in the cloud taking advantage of PaaS services is truly a game changer!
Microservices in the cloud now give companies nearly limitless resources, on demand. In the future, I think people will look back at this moment in history and see that it had profound implications.
Businesses today can’t live without information. Information helps businesses provide better customer service and run more efficient, cost-effective operations, which helps them grow and deliver value to their shareholders.
Companies that take advantage of these new application design paradigms will be able to make more efficient use of the compute resources available to scale and grow their businesses.
For companies that don’t, doing business will become harder and harder.
A business’s technology needs to be flexible, adaptable, and cost-efficient to maintain and enhance value. If a company can’t change, or change takes too much time and resources, it will surely fail to deliver the value its customers expect.
Miniaturization – https://en.wikipedia.org/wiki/Miniaturization
Information Age – https://en.wikipedia.org/wiki/Information_Age
Electrification – https://en.wikipedia.org/wiki/Electrification
Microservices: An application revolution powered by the cloud
COM+ Programming with Visual Basic – Copyright 2001, O’Reilly & Associates.