IT Connections: How to Manage Reliable Delivery of Service in the Battle For Bandwidth
Right now, the load on cloud-based applications is at an all-time high, with people working from home accessing the same resources simultaneously.
With the dramatic shift to remote work, we have seen daytime Internet usage skyrocket 34.4% between the typical working hours of 9 and 5. A large portion of that traffic comes from video conferencing and collaboration tools, both of which are bandwidth-intensive applications.
With that much real-time traffic competing for bandwidth, end users are bound to experience slowdowns: jitter and delay during video conference calls, dropped audio calls and unreliable access to the Internet.
We spoke with Rob Doucette, VP Product Development, and Sebastien Tellier, Director of Channel Programs, from Martello to get insight into some of the best practices that IT administrators can deploy to secure and manage reliable delivery of service.
Looking at the challenges remote work presents, how important is the user experience when it comes to accessing applications?
Rob: It’s become critical for applications and services delivered by the cloud. Some of the traditional SaaS applications that employees consume don’t rely on IT components that your IT teams have control or visibility over. For example, Microsoft Office has transitioned from on-premises deployments and branch offices, where there might have been legacy network monitoring tools and application monitoring tools to make sure that all those pieces were up and running so that everyone could access their email and use those productivity tools. Now that these services and applications are hosted in Microsoft’s data center, there’s nothing the IT team really has control or visibility over from an infrastructure perspective, but they can manage and control the user experience. Having insight into how well a user is accessing a service or application, in terms of latency, how long a page takes to load in SharePoint and other things like that, becomes really critical, because it’s the only thing the IT team is going to have visibility into as more of these applications transition to the cloud.
What other applications could be impacting available bandwidth on your network?
Rob: There are quite a number. The most critical are the real-time services that all of us are now becoming more accustomed to relying on to do our jobs from remote locations, in particular voice and video, where small disruptions in a network can have a significant negative impact on the user experience.
Before a business looks to migrate their applications and services to the cloud, how would you assess if the network is able to support what they want to do?
Sebastien: Normally, the first reflex people have is to check whether they have sufficient bandwidth going to the Internet, because logically speaking, your end users are accessing it from their phones, computers and other platforms. But not all traffic is created equal: more bandwidth doesn’t mean you’re going to have low delay, low packet loss or low jitter.
You could run a free speed test online to measure bandwidth, but that test won’t necessarily tell you that everything is fine. It will only tell you that you technically have enough bandwidth and that the figure guaranteed by your service provider is being met; it doesn’t tell you the quality of the connection. A better way is to use a solution like Martello’s UCScore. From your browser, it sends simulated calls to identify and measure those different metrics. By measuring jitter, delay and packet loss, you can really establish how good your call would be. If you score high, you are likely ready to make the move; if you score low, it would be recommended that you look at your overall network infrastructure.
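To make the three metrics Sebastien mentions concrete, here is a minimal sketch in Python of how delay, jitter and packet loss can be summarized from a series of probe round-trip times. The probe data, function name and jitter formula are illustrative assumptions, not Martello's actual method; real tools like UCScore use simulated calls rather than this simplified calculation.

```python
def call_quality_metrics(rtts_ms):
    """Summarize probe results. rtts_ms is a list of round-trip times in
    milliseconds, with None marking a probe that was lost in transit."""
    received = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)
    avg_delay = sum(received) / len(received)
    # Jitter here is the mean absolute difference between consecutive
    # round-trip times -- a simplified stand-in for the interarrival
    # jitter estimate defined in RFC 3550 for RTP streams.
    diffs = [abs(a - b) for a, b in zip(received, received[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    return {"delay_ms": avg_delay, "jitter_ms": jitter, "loss_pct": loss_pct}

# Example: ten probes, one of which never came back.
probes = [42.0, 45.0, 41.0, None, 44.0, 43.0, 47.0, 42.0, 44.0, 43.0]
metrics = call_quality_metrics(probes)
```

A high-bandwidth link can still score badly here: a fast connection with wildly varying round-trip times yields high jitter, which is exactly the distinction between a speed test and a quality test.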
After making the assessment of what the network can support, what do you look for next?
Rob: One of the challenges we hear a lot from our customers and prospects is understanding which applications are being used. Having a capability to discover what is in use can help you understand what the load is on the network and what prioritization you need to look at.
Being able to assess whether the network can support a new application is great, but you also need to understand what’s already there.
Now that everything is up and running, Sebastien, in which ways can you monitor the system to ensure that everyone is receiving a positive user experience?
Sebastien: Performance issues don’t just come out of nowhere; there are signs you can look for, and you can detect them fairly easily if you know what you’re looking for. When you’re monitoring a real-time service, you want to test and simulate as closely as possible the real-life experience the end user is going to have. You wouldn’t get that by running a speed test, so you need to send actual data packets from your network environment along the same path your session would take. Running those synthetic transactions allows you to send alerts to your staff to ensure good quality of service.
Rob: I think there’s a great angle here for one of our products, Martello iQ, which provides visibility for IT teams across the tools they use. We have access to a lot of data from all of these tools, and that data becomes really valuable when we can look at it in the context of user experience problems.
In the case of a user having a problem accessing a cloud service, we can go back and look at alerts that may have otherwise gone unnoticed in a monitoring environment, and use them to explain why that user had a problem and potentially fix it. Having that data is super important for describing and fixing any kind of potential user experience problem.
The other aspect we see getting a lot of traction is better visibility and more flexibility around service level agreements. Sometimes an application shows as healthy, up and running, but users are having difficulty accessing it. It’s not that it’s unavailable; it’s that it’s performing poorly. Just saying it’s up and running isn’t good enough if people are giving up on trying to get to your SharePoint site, for example.
How will this new approach to bandwidth management play a part in digital transformation?
Sebastien: Everything is in the data. The more information you have, the better the conclusions you can draw, and if you can also automate those conclusions, you have a winning scenario. Running those tests, collecting the information and getting to the point where alerts are sent automatically provides you with the right tools to understand what’s happening without having to be an expert in the field.