From email to Word documents, presentations and spreadsheets, Microsoft’s suite of productivity software is an essential tool for most businesses. Because Microsoft 365 is cloud-hosted, it offers the ability to collaborate from any location. That flexibility comes at a cost, however: it presents a new set of challenges in monitoring and managing the end user experience. Organizations that are heavily dependent on Microsoft 365 simply can’t afford not to have visibility into what their users are experiencing. Join us for this session of #ITConnections, where we discuss this popular remote work tool and how to ensure a positive user experience in your organization.
Joining us for this session were Olivier Raynaut, VP Client Delivery at Martello, and Rob Doucette, VP Product Management at Martello.
With most of the world working from home these days, what should people be thinking about to ensure an optimal Microsoft 365 experience for those employees?
Rob: I think the situation driving this behaviour change is that organizations, and specifically IT teams, have lost control of and visibility into the technology stack underneath Microsoft 365. In the past you may have been monitoring pieces and components in your branch offices; that doesn’t exist anymore, and the only thing left to look at is the user experience. Traditional monitoring tools are still very component-, computer- and device-centric in their monitoring strategy and don’t necessarily provide the level of user experience monitoring that organizations are looking for. That’s driving a shift in monitoring, where we’re seeing an increased focus on technologies like synthetic transaction monitoring and real user monitoring to better understand what the user is doing and how well they are able to access the services and applications they need to be productive. There’s another angle, too: legacy tools like application performance monitoring (APM) and network performance monitoring (NPM) still play a very important part in that picture, because they can be used in conjunction with synthetics and real user monitoring to understand where a problem is.
How can Martello help ensure that hybrid organizations deliver optimal services to their employees?
Olivier: Not everything between the user and the cloud service is in the cloud. It’s key to get the entire picture and to have visibility across each of those components. Many large organizations still have some on-premises Exchange servers, for example, and most have federation in place to keep control over authentication and federate across their various applications. In large enterprises, even when users are connecting remotely or from various offices, their traffic typically routes back through headquarters, passing through proxies or at least firewalls, or over a VPN into the headquarters. So it’s critical to have visibility from those places and to understand each part of the journey, or the route, to the cloud.
What exactly are Gizmo’s “Robots” and how do they help the end user’s experience?
Olivier: The goal is to capture the end user experience, and there’s really only one way to do that accurately and 24/7: by simulating that experience and acting as a user. The idea behind Gizmo is to simulate that experience. We have what we call the Gizmo robots: probes that you deploy anywhere in the world. They run on Windows machines, and once deployed you can easily control them from a central location and tell them to do, 24/7, what your users will be experiencing. These robots act like users and simulate the clients, so they’re not just doing browser simulations; they’re actually simulating the Outlook client and the Teams client, leveraging the same protocols as your end users, in order to capture the experience and the time it takes to perform any of the actions. This provides granularity: it’s not just how long it takes to connect and reach the cloud, it’s really how long it takes to send an instant message and for the other person to receive and read it, what the quality of an audio call is, and so on.
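To make the idea concrete, here is a minimal sketch of the general pattern behind synthetic transaction monitoring: a probe repeatedly performs a scripted user action, times it, and records success or failure. This is an illustration of the concept only, not Martello’s implementation; the `send_instant_message` function is a hypothetical stand-in for a real client protocol exchange.

```python
import time
from dataclasses import dataclass

@dataclass
class ProbeResult:
    action: str        # name of the simulated user action
    duration_s: float  # how long the action took, in seconds
    ok: bool           # whether the action completed without error

def run_probe(action_name, action):
    """Time a scripted user action and record whether it succeeded."""
    start = time.monotonic()
    try:
        action()
        ok = True
    except Exception:
        ok = False
    return ProbeResult(action_name, time.monotonic() - start, ok)

# Hypothetical stand-in for a real client action (e.g. sending a Teams IM).
def send_instant_message():
    time.sleep(0.01)  # placeholder for the real protocol round trip

result = run_probe("teams.send_im", send_instant_message)
print(result.action, round(result.duration_s, 2), result.ok)
```

A real robot would run such probes on a schedule from each monitored location and ship the `ProbeResult` records to a central collector for alerting and trend analysis.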
Microsoft already supplies customers with tools such as the “Call Quality Dashboard” and the “Service Health Dashboard”. Why would I need a product like Martello Gizmo?
Olivier: The Service Health Dashboard provides good visibility inside the data center, into what Microsoft is monitoring and knows. They will publish information letting you know that your users may be impacted, based on what they have monitored on their side. The Call Quality Dashboard is really good at supporting specific use cases. For example, when your CEO or VIP users call the IT department or helpdesk about specific issues, the Call Quality Dashboard allows you to go in, look at that specific call and look at the user’s environment: what their device was, for example, or whether they were using a bad headset that could impact the voice quality; that’s something you can see from there. So I think it’s a great tool to support that type of use case, but unfortunately it’s focused on specific user calls. It does a good job there, but that’s it; it doesn’t help you gain visibility into the service that’s being delivered.
Identifying an issue is only the first part of the troubleshooting process. Explain how Martello is able to, not only pinpoint the issue, but also provide a holistic view of the total impact to the business.
Rob: When we are looking into the details of why there might be a problem, having a deep discovery and understanding of the path between the user and the service they’re trying to access is critical. There are many things that could go wrong between the user and that service: the local WiFi, an endpoint that is overworked and impacting the ability to reach the service, DNS taking you to undesirable locations on your path to the cloud. There are lots of factors that are outside of Microsoft’s data center but have a significant impact on your experience when you’re accessing these services. We also see a lot of value in understanding that path in chunks or segments. We can think of it as the network local to the user, whether that’s your WiFi, your LAN, or your SD-WAN and MPLS implementation; then your Internet provider; then the backbone; and eventually Microsoft’s data center. Knowing which of those segments the problem lies in really helps people understand who’s responsible and what they can do about it. Correlating this data in iQ gives us the immediate business impact, so we understand which locations are impacted, which users are impacted, and whether users are actually having problems at this moment. Being able to roll that up into service level agreements and SLA reporting gives organizations the ability to see how well the service is being delivered to their customers.
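The segment-by-segment reasoning described above can be sketched as a simple comparison of measured latency against a per-segment baseline. The segment names, latency figures and tolerance below are illustrative assumptions, not Martello’s actual model or data.

```python
# Path segments between the user and the cloud service, in order.
SEGMENTS = ["local LAN/WiFi", "ISP", "Internet backbone", "Microsoft data center"]

def localize_problem(measured, baseline, tolerance_ms=20.0):
    """Return the first segment whose measured latency (ms) exceeds its
    baseline by more than `tolerance_ms`, or None if all look healthy."""
    for name, m, b in zip(SEGMENTS, measured, baseline):
        if m - b > tolerance_ms:
            return name
    return None

# Example: the ISP segment has ballooned relative to its baseline.
measured = [3.0, 95.0, 12.0, 8.0]
baseline = [2.0, 15.0, 10.0, 8.0]
print(localize_problem(measured, baseline))  # -> ISP
```

In practice the per-segment numbers would come from path discovery (hop-by-hop measurements grouped into these segments), but the localization logic — compare each segment against its own normal behaviour and attribute the problem to the first one that deviates — is the core idea.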
In March 2017, due to an intermittent Office 365 outage, users all over the world had difficulty accessing their OneDrive, Skype for Business and Outlook accounts. In June 2017, another Office 365 downtime was registered across Europe and the USA. In September 2017, European subscribers faced an Exchange Online outage. More recently, on February 3, 2020, Microsoft had an issue with Teams where users weren’t able to access the application. This resulted in lost productivity and frustration for businesses that are heavily reliant on Microsoft applications. So with that said, how could Martello products help mitigate the severity of these issues for their customers?
Olivier: Typically what we see with our clients is that most issues are under the control of the organization. But whether they are or not, the most important thing is to detect them. It’s all about detecting, diagnosing, and then resolving, fixing or optimizing, and each step can only be as good as the previous one. So it all starts with being the best at detecting. I see it as a very top-down approach: what’s critical is to understand what’s coming out of the cloud provider, in this case Microsoft. Thanks to the monitoring that’s in place 24/7, as soon as there’s an impact for your users in your tenant, we’ll be able to detect it. It’s really critical to have this synthetic transaction monitoring in place so that you can have this proactivity: you don’t need to wait for users to complain to be aware of an issue.
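The “detect first” step described here could, in its simplest form, be a baseline check over the timings that synthetic transactions produce: flag a measurement that deviates sharply from recent history. The window size, sigma threshold and timing values below are illustrative assumptions, not Martello’s detection logic.

```python
from statistics import mean, stdev

def detect_degradation(history, latest, sigma=3.0, min_samples=5):
    """Flag `latest` (seconds) as degraded if it exceeds the historical
    mean by more than `sigma` standard deviations (a common rule of thumb)."""
    if len(history) < min_samples:
        return False  # not enough data to judge
    mu, sd = mean(history), stdev(history)
    return latest > mu + sigma * max(sd, 1e-9)

# Example: instant-message send times normally around 0.5 s, then a 3 s outlier.
history = [0.48, 0.52, 0.50, 0.49, 0.51, 0.50]
print(detect_degradation(history, 3.0))   # -> True  (clear degradation)
print(detect_degradation(history, 0.53))  # -> False (within normal variation)
```

Running a check like this continuously over each probe’s measurements is what makes the monitoring proactive: the alert fires from the synthetic data, before any real user files a ticket.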
Once you have detected the issue, you understand its scope, its impacts, and who is affected within the organization, and you’re able to take action. Even if the issue is out of your control, you can still do a lot to communicate it, to prevent the wave of support tickets and frustration that might otherwise take place, and to offer possible alternatives to the end users.
It’s all about diagnosing with the information at hand, and that can only be as good as the data you’re retrieving. You may already have some great tools in-house, so correlating their data with synthetic transactions through Gizmo, for example, and our other products will allow you to understand where that issue is taking place and what you can do to mitigate it.
It seems like Digital Experience Monitoring is something that people are talking more about. What is Digital Experience Monitoring and how do products like iQ and Gizmo fit in to the DEM solution?
Rob: Fundamentally, digital experience monitoring focuses on user-centric monitoring: understanding how the user is interacting with these services and applications, regardless of what the application and infrastructure components may be doing. There are really three key pillars within the digital experience monitoring market. First, there’s real user monitoring, which tells us when an actual user is having a problem. This is a very reactive model: a user is already unhappy, angry and demanding a fix, and you’re scrambling to try and resolve it quickly. That’s why we talk a lot about synthetic transaction monitoring, which provides a proactive view: it tells us that if a user were trying to have a Teams call right now, for example, the experience would not be great. So we have an opportunity to be proactively notified of a potential issue and try to solve it. The last pillar is endpoint visibility: really understanding things from the exact perspective of the user and the user’s device. Gizmo and iQ complement each other quite well: Gizmo provides the strength in detecting that there is a problem, and iQ gives us the ability to align how well those services are being delivered with the business outcomes that these organizations care about.