As more organizations shift to the cloud, their security teams are working to keep pace. The use of Amazon Web Services (AWS) and Microsoft Azure has skyrocketed in the last few years. This blog post discusses some of the challenges that must be overcome when developing monitoring solutions for the cloud. It focuses on AWS and Azure because they are currently the market leaders in this space and are being adopted by most organizations. While those two are the focus of this article, many of the topics discussed apply to other cloud-based solutions as well.

The first step in monitoring a cloud service is determining how to get the logs out. Does the vendor even expose the logs and make them available to the client? Is there an Application Programming Interface (API) that can be queried to retrieve the information? Or does the vendor provide an agent or code that can be run in your environment to retrieve the logs and send them to a designated destination? Under Azure Monitor, Microsoft has standardized on Event Hubs for sending data to outside monitoring resources.
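Whatever the transport, API-based retrieval usually reduces to draining a paginated endpoint until the vendor stops handing back a continuation token. The sketch below is a minimal, vendor-neutral version of that loop; `fetch_page` and the `nextToken`/`records` field names are hypothetical stand-ins for whatever the vendor's API actually returns, not a real AWS or Azure call.

```python
from typing import Callable, Dict, Iterator, Optional

def pull_logs(fetch_page: Callable[[Optional[str]], Dict]) -> Iterator[dict]:
    """Drain a paginated log API: keep calling fetch_page with the
    continuation token until the vendor stops returning one.
    (fetch_page and the nextToken/records keys are illustrative.)"""
    token: Optional[str] = None
    while True:
        page = fetch_page(token)
        yield from page.get("records", [])
        token = page.get("nextToken")
        if not token:
            break
```

In practice `fetch_page` would wrap the vendor SDK or a signed HTTP request; keeping it injectable also makes the collection loop easy to test against canned pages.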

These Event Hubs give a client high-speed access to a variety of logs through a single interface. Many AWS resources, by contrast, store their data in an S3 bucket for retrieval and processing. Even if the vendor does expose the logs, are they available in real time? Some vendors have a lag between when a log is generated by the service and when it becomes available to the client. It is also important to understand which services and logs the vendor makes available. Vendors often expose only specific types of logs, or publish a roadmap for when certain logs will become available. This has been the case several times with Azure, including the availability of Azure AD and SQL logs. At the time of writing, Azure SQL and Azure AD logs were still in private preview. Further, only admin logs were available in that private preview; sign-in logs were not. This matters because the sign-in logs provide a high level of granularity about who is attempting to access the environment.
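That generation-to-availability lag can be measured rather than guessed at. A minimal sketch, assuming records carry a CloudTrail-style `eventTime` ISO-8601 timestamp (the actual field name varies by vendor and service):

```python
from datetime import datetime, timezone

def ingest_lag_seconds(event_time_iso: str, received_at: datetime) -> float:
    """Seconds between when a log record was generated and when it
    became available to the collector."""
    generated = datetime.fromisoformat(event_time_iso.replace("Z", "+00:00"))
    return (received_at - generated).total_seconds()

def flag_delayed(records, received_at, threshold_s=300.0):
    """Return the records whose delivery lag exceeds the threshold,
    e.g. to alert when a feed falls behind. ("eventTime" is an
    assumed field name here.)"""
    return [r for r in records
            if ingest_lag_seconds(r["eventTime"], received_at) > threshold_s]
```

Tracking this lag per feed tells you which detections can realistically be near-real-time and which will always trail the event by minutes or hours.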

Once it is determined that the logs are available and can be retrieved, the next question is whether the logs can be differentiated. Are all the logs stored in one place, or are they separated by service or type? Having some type of identifier, or having the logs stored in different locations, allows you to collect only the logs you are interested in. It also ensures you can distinguish between different devices within the same service. As the logs are processed, this identifier allows the system to parse them correctly and to inventory where logs are coming from and how many devices or services are reporting.
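A collector can use such an identifier to build that inventory directly. A sketch, assuming each record carries a `resourceId` field (the actual identifier field differs per service, and records missing it are bucketed separately so gaps are visible):

```python
from collections import Counter

def inventory(records, id_field="resourceId"):
    """Count records per source identifier so the pipeline can route
    each source to the right parser and track which devices or
    services are actually sending logs."""
    counts = Counter()
    for rec in records:
        counts[rec.get(id_field, "<unidentified>")] += 1
    return counts
```

Sources that suddenly drop to zero, or an `<unidentified>` bucket that starts growing, are both worth alerting on.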

While retrieval may seem trivial, there are challenges to overcome. One is the naming convention and the depth of the directory structure. As examples one and two below show, vendors often do not follow a standard directory naming convention, which makes it challenging to develop software that must account for all of these variations. Some of these naming conventions make sense; others are difficult to explain. Sometimes the vendor documents the conventions, and sometimes it does not. To complicate matters further, different services may use different structures. In most circumstances, the collection software needs to be intelligent enough to handle all of these different levels, names, and formats, and to give the user the flexibility to select which logs should be collected.

Example 1

Azure example directory structure (image)
Example 2

AWS example directory structure (image)
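To illustrate what collection software has to cope with, the sketch below classifies object paths against two layouts: a CloudTrail-style S3 key and an Azure-diagnostic-style blob path. The patterns are simplified approximations for illustration, not the vendors' complete formats, and a real collector would carry one pattern per service it supports.

```python
import re

# Two illustrative layouts (simplified; real formats vary by service):
#   AWSLogs/<account>/CloudTrail/<region>/<yyyy>/<mm>/<dd>/<file>.json.gz
#   resourceId=/SUBSCRIPTIONS/<sub>/.../y=<yyyy>/m=<mm>/d=<dd>/h=<hh>/m=00/PT1H.json
PATTERNS = {
    "aws_cloudtrail": re.compile(
        r"AWSLogs/(?P<account>\d+)/CloudTrail/(?P<region>[\w-]+)/"
        r"(?P<y>\d{4})/(?P<m>\d{2})/(?P<d>\d{2})/"),
    "azure_diagnostic": re.compile(
        r"resourceId=/SUBSCRIPTIONS/(?P<sub>[^/]+)/.*"
        r"y=(?P<y>\d{4})/m=(?P<m>\d{2})/d=(?P<d>\d{2})/"),
}

def classify_path(path: str):
    """Identify which naming convention a log object follows and pull
    out the fields a collector needs (source account, region, date).
    Returns (convention_name, fields) or (None, {}) if unrecognized."""
    for name, pattern in PATTERNS.items():
        match = pattern.search(path)
        if match:
            return name, match.groupdict()
    return None, {}
```

Unrecognized paths should be surfaced to the user rather than silently skipped, since a vendor adding a new service or changing a layout looks exactly like this case.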
That concludes the challenges an organization may face when attempting to retrieve logs from a cloud environment. Look out for part two of this post, which will cover the format and syntax of the logs and the actual data elements.