Zemoso helped a Fortune 500 company accelerate the development and deployment of an emissions-tracking solution. The solution processes a variety of data relayed through a ground-based sensor network and automates time-consuming processes at remote rigs and production centers, enabling immediate corrective action.
Rising emissions are a major concern for EnergyTech companies around the world. The oil and gas sector is one of the biggest contributors to harmful emissions. Total indirect emissions from oil and gas operations constitute 15% of total energy sector emissions. Between the Paris Agreement and Biden’s executive order, the pressure is on, and intelligently managing emissions has become a top priority for all energy production and processing companies.
On an expedited timeline, Zemoso co-created a solution that processes a wide variety of sensor data.
We started with our proprietary version of the Google Ventures Design Sprint and ran weekly sprints, delivering features and enhancements incrementally. Multiple scrum teams worked on different aspects of the project, with a Scrum of Scrums to keep things on track. The Zemoso pod was agile, scaling up or down depending on the skills needed at any given point in the project.
We used a microservices architecture, with each service independent. The frontend was built using React and Redux. Local and remote data were managed via Apollo Client. GraphQL, an intermediate layer, was used to make deployments faster and easier. Material UI was used to develop user interface components. The backend was developed using Spring Boot and Java. The Drools rule engine was used to specify the action to be taken when a particular condition is met. RabbitMQ served as the message broker.
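The rule-engine pattern at the heart of this design pairs a condition with an action and fires the action whenever the condition matches incoming data. The actual system used Drools (DRL) on the Java backend; the sketch below is a minimal Python illustration of the same idea, and the sensor field, thresholds, and rule names are hypothetical, not the client's deployed rules.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Reading:
    site: str
    methane_ppm: float  # hypothetical sensor field


@dataclass
class Rule:
    name: str
    condition: Callable[[Reading], bool]
    action: Callable[[Reading], str]


def evaluate(rules: List[Rule], reading: Reading) -> List[str]:
    """Fire the action of every rule whose condition matches the reading."""
    return [r.action(reading) for r in rules if r.condition(reading)]


# Hypothetical rule: flag methane above a threshold for corrective action.
rules = [
    Rule(
        name="high-methane",
        condition=lambda r: r.methane_ppm > 500,
        action=lambda r: f"ALERT {r.site}: methane {r.methane_ppm} ppm",
    )
]
```

Keeping conditions and actions as data, rather than hard-coded branches, is what lets operations teams add or retire rules without redeploying services.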
We integrated Cesium, a third-party geographic information system (GIS) API, to pinpoint locations, which was integral for proactive action.
Test automation was conducted using Cucumber and Selenium: Cucumber provided behavior-driven test specifications, while Selenium automated the browser-based UI tests. We also automated API testing.
Zemoso used the Message Queuing Telemetry Transport (MQTT) protocol to connect remote devices. We ensured that images could be resized and that the system could ingest data from globally dispersed devices.
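MQTT routes device data through hierarchical topics, and subscribers use the standard `+` (single-level) and `#` (multi-level) wildcards to receive data from whole fleets of dispersed devices at once. A minimal sketch of that topic-filter matching, with hypothetical topic names (the real client used an MQTT broker and client library, not hand-rolled matching):

```python
def mqtt_topic_matches(pattern: str, topic: str) -> bool:
    """Minimal MQTT topic-filter matching: '+' matches exactly one
    level, '#' matches the remainder of the topic."""
    p_levels = pattern.split("/")
    t_levels = topic.split("/")
    for i, p in enumerate(p_levels):
        if p == "#":
            return True  # multi-level wildcard matches everything below
        if i >= len(t_levels):
            return False  # topic is shorter than the filter
        if p != "+" and p != t_levels[i]:
            return False  # literal level mismatch
    return len(p_levels) == len(t_levels)
```

A subscription like `sensors/+/methane` would then receive readings from every rig without the backend having to know each device in advance.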
Following DevOps best practices, we implemented CI/CD using Jenkins and GitHub (both locally and remotely). We also used containerization to future-proof the product and lay the groundwork for faster service deployments.
We used a content delivery network (CDN) to deliver data and videos quickly with low latency. We used Amazon Simple Notification Service (SNS) for messaging and AWS Lambda to respond to new information and events. We also connected onsite resources to the cloud infrastructure so the system could easily absorb sudden spikes in data traffic, handling millions of requests per second.
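When SNS fans a message out to Lambda, the function receives the standard SNS event payload (a `Records` list whose entries carry an `Sns.Message` string). A minimal sketch of such a handler; the message fields (`site`, `reading`) are illustrative, not the client's actual schema:

```python
import json


def handler(event, context):
    """Hypothetical AWS Lambda entry point reacting to SNS notifications.

    The event shape follows the standard SNS-to-Lambda payload; the
    message body is assumed to be JSON with illustrative fields.
    """
    results = []
    for record in event.get("Records", []):
        # SNS delivers the published message as a JSON-encoded string.
        message = json.loads(record["Sns"]["Message"])
        results.append(f"{message['site']}:{message['reading']}")
    return results
```

Because each invocation handles one delivery independently, Lambda scales out automatically under the traffic spikes described above.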
We used PostgreSQL and Redis to ensure sub-millisecond response times, enabling millions of requests per second. Amazon S3 was used to store generated reports and images.
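Pairing a database with Redis typically follows the cache-aside pattern: read from the cache first, fall back to the database on a miss, then populate the cache so repeat reads stay sub-millisecond. A minimal sketch of that read path, with a plain dict standing in for both stores and hypothetical key names:

```python
class CacheAside:
    """Cache-aside read path: check the cache first, fall back to the
    database on a miss, then populate the cache. A dict stands in for
    Redis and PostgreSQL here; a real deployment would use clients for
    each."""

    def __init__(self, db):
        self.db = db      # stand-in for PostgreSQL
        self.cache = {}   # stand-in for Redis
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.hits += 1
            return self.cache[key]  # fast path: served from cache
        self.misses += 1
        value = self.db[key]        # slow path: query the database
        self.cache[key] = value     # populate the cache for next time
        return value
```

Only the first read for a key pays the database round trip; everything after is served from memory, which is what makes millions of requests per second feasible.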
Notifications: The sensors at the node station capture live readings and send the data within seconds to Amazon Web Services (AWS). The data is then sliced, diced, processed, and indexed based on similarities or discrepancies. Supervised and unsupervised machine learning (ML) models generate the notifications. We also helped deploy rules to classify notifications by priority.
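Priority classification of this kind usually buckets a notification by how far a reading deviates from its baseline. A minimal sketch of such a rule; the thresholds and priority labels are illustrative, not the rules deployed for the client:

```python
def classify_priority(deviation_pct: float) -> str:
    """Bucket a notification by percentage deviation from baseline.

    The thresholds below are hypothetical, for illustration only.
    """
    if deviation_pct >= 50:
        return "critical"
    if deviation_pct >= 20:
        return "high"
    if deviation_pct >= 5:
        return "medium"
    return "low"
```

Routing only "critical" and "high" notifications to on-call operators is what turns a stream of raw ML alerts into the immediate corrective actions described above.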
P.S. Since we work on early-stage products, many of them in stealth mode, we have strict Non-disclosure agreements (NDAs). The data, insights, and capabilities discussed in this blog have been anonymized to protect our client’s identity and don’t include any proprietary information.
©2024 Zemoso Technologies. All rights reserved.