Although this vastly simplifies the world of today, it is a (terribly drawn) depiction of how applications work in theory. There is an end user who communicates over some network to some data center. APM strategies generally focus on understanding this transaction through data collection points at critical components and on delivering a picture of this flow.
Over my years in this space, I have come to realize there are many great ways to get visibility into the transaction. In each area I always had the confidence to make suggestions for improving performance at any point in this diagram except one: the public internet. Do not get me wrong. There are plenty of tools, like ThousandEyes or Outage Analyzer, to determine WHERE a problem exists, but I have never seen a great strategy that SOLVES the problem.
I was extremely happy at Dynatrace. I was 26 years old, working in the Bay Area, and regularly meeting with the logos associated with Silicon Valley. In the past two years I had lost less than 10% of opportunities to competition. I was becoming the new-technology SME and had opportunities like presenting at Docker Meetups. I was meeting with Site Reliability and Application Performance Engineers who are well known in the development and deployment world. I was not the only one feeling the success of the product. Dynatrace as a company was wrapping up another stellar year with the announcement of the unification of Ruxit and Dynatrace. The organization's market share was still number one and growing. The future looked extremely bright for Dynatrace, and I was looking forward to enjoying the ride. Then, in steps Teridion.
Optimizing the Pipe and Not Building Water Towers
As my experience with Dynatrace grew, so did my understanding of just how complex applications could be. A good APM solution can tell you whether the problem is on the end user device, inside the data center (the cloud is just someone else's data center), or in the network between the two (most often the public internet). I could have significant performance-improvement conversations with customers who were interested in fixing problems on the end user device or inside their data center. The public internet, however, had only one solution that I knew of: use the internet less. The only ways I knew to improve the performance of the internet were to enable caching, push content out to CDNs, add more data centers closer to end users, or build out a private fiber network. Most large sites use a CDN to address this concern since it is the easiest "fix."
Content Delivery Networks (CDNs) help reduce this problem. They are great for storing cacheable content near end users. However, the internet is moving more and more toward individual experiences. Pages are composed of more dynamic content and less static content. For example, when I look at a news feed, I click on titles that interest me, read a few sentences, and immediately go to the comments section. Anything revolving around that personal experience will be a challenge for a CDN to deliver. A better theory of delivery is needed for the modern web. I was very aware of this issue but never had an answer. I did not know there was a better way.
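The static-versus-dynamic distinction above can be sketched with a toy simulation (this is purely illustrative, not any real CDN's behavior or API; all names and URLs are made up). A cache keyed by URL pays off when every user asks for the same asset, but personalized content produces a distinct key per user, so the cache never helps:

```python
# Toy edge-cache sketch: static assets hit the cache, personalized
# responses (one URL per user) never do.

class EdgeCache:
    def __init__(self):
        self.store = {}
        self.hits = 0
        self.misses = 0

    def get(self, url, fetch_origin):
        # Serve from cache if we have seen this exact URL before.
        if url in self.store:
            self.hits += 1
            return self.store[url]
        # Otherwise go back to the origin and remember the result.
        self.misses += 1
        body = fetch_origin(url)
        self.store[url] = body
        return body

def origin(url):
    # Stand-in for the origin server: returns some content for the URL.
    return f"content for {url}"

cache = EdgeCache()

# Static asset: 100 users request the same URL -> cached after the first fetch.
for _ in range(100):
    cache.get("/static/logo.png", origin)

# Personalized feed: each user has a unique URL -> every request misses.
for user in range(100):
    cache.get(f"/feed?user={user}", origin)

print(cache.hits, cache.misses)  # 99 hits vs 101 misses
```

The static asset misses once and then hits 99 times; the personalized feed misses all 100 times, which is exactly why caching alone cannot fix the modern, individualized web.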
Holding the Internet Accountable
What do Comcast, Dish, and Sprint all have in common? If you answered that they all represent last-mile content delivery, you would be correct. If you also answered that they were voted into the Top 10 Most Hated Companies in the US according to 24/7 Wall St., you would ALSO be correct. There is personally nothing worse (exaggeration) than coming home from a hard day, firing up Netflix, and... the stream is not working. Who gets blamed? ISPs! Who's at fault? I DO NOT CARE, I JUST WANT TO WATCH BOB'S BURGERS! ISPs are the face of the end user's frustration, yet they are just the last leg of a long journey of content delivery that spans multiple handlers over many literal miles. They get blamed because the path is complicated and they are the face of that complication. Just how complicated can this path be? Here is a traceroute from my terminal to google.com:
Eleven hops. Double-digit points of failure that are mostly set statically and involve multiple players. Teridion opened the conversation with a very simple question: "Who controls the internet?" Unless you subscribe to conspiracy theories, you probably know that no one controls the internet. There are big players, but even when you go to Google there are at least half a dozen touch points that the traffic is routed through. Coming from the APM space, I knew the user-to-data-center leg was inconsistent; I just did not realize how much of a performance drag it was! Teridion showed me an elegantly simple demo:
"I don't believe you." This was my response to the demo. This also the exact response I wanted any potential lead to have when I was showcasing a product. The logic behind the solution makes sense as well. I had to get involved. I saw an answer for a problem that all the IT organizations worried about, but only had a band aid for the solution. How do you control the public internet? If you are Facebook or Google, you lay your own fiber. If you are not those guys, Teridion is the solution to provide reliability and performance.
Whenever a customer says those four words in the title, I have to follow up with a simple explanation of how it works. Whelp, Teridion is the Waze of the internet (the "Uber of ..." line died in 2015). The main protocol used to route traffic across the internet is BGP. Think of BGP like a paper map of the United States. If you needed to take a trip from Washington, D.C., to San Francisco, you would most likely plot out a course using the main highways. The map cannot tell you about construction, congestion, weather, or anything else that will impact your journey. BGP is older than I am. I have never used a paper map to plan a road trip. Waze is a GPS application that also takes in current road conditions to plot the fastest course. Similarly, Teridion proactively determines the fastest way to get from the user to the location of the content by taking in metrics from agents constantly testing the internet. This data is fed into a single analytics engine, which can then create HOV lanes for internet traffic. It is "elegantly simple."
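The paper-map-versus-Waze analogy can be made concrete with a small sketch (my own illustration, not Teridion's actual algorithm or BGP's real path-selection logic; the topology and latency numbers are invented). The same shortest-path search gives different answers depending on whether edges are weighted by a static view of the network (hop count, like the paper map) or by currently measured latency (like Waze):

```python
import heapq

def shortest_path(graph, src, dst):
    # Plain Dijkstra over weighted edges; returns (cost, path).
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == dst:
            break
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return dist[dst], path[::-1]

# "Paper map" view: every link costs the same, so fewest hops wins.
hop_count = {
    "user":       {"isp": 1},
    "isp":        {"backbone_a": 1, "backbone_b": 1},
    "backbone_a": {"origin": 1},
    "backbone_b": {"relay": 1},
    "relay":      {"origin": 1},
}

# "Waze" view: same topology weighted by measured latency in ms
# (made-up numbers; backbone_a happens to be congested right now).
latency = {
    "user":       {"isp": 5},
    "isp":        {"backbone_a": 120, "backbone_b": 10},
    "backbone_a": {"origin": 10},
    "backbone_b": {"relay": 8},
    "relay":      {"origin": 7},
}

_, static_path = shortest_path(hop_count, "user", "origin")
cost, dynamic_path = shortest_path(latency, "user", "origin")
print(static_path)   # fewest hops: via backbone_a
print(dynamic_path)  # fastest right now: via backbone_b, despite an extra hop
```

With static weights the three-hop route through backbone_a wins; once live measurements are fed in, the longer route through backbone_b and the relay is chosen because it is actually faster. That is the whole idea behind continuously re-measuring the internet instead of trusting the map.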
Modern Routing for the Modern Web
Want to learn more? Can't believe it is true? Check out Teridion for more information.