Miro Leverages Amazon Bedrock for AI-Powered Bug Routing, Reducing Resolution Time from Days to Hours
Miro, an AI-powered innovation workspace serving over 95 million users globally, faced a significant challenge in its developer experience: efficiently routing software bugs to the correct teams. Inefficient bug routing led to unnecessary context-switching, developer frustration, and extended time-to-resolution, with a substantial percentage of bugs missing internal resolution SLAs due to misrouting and repeated reassignments. This issue resulted in an estimated 42 years of cumulative lost productivity annually.

To address this, Miro partnered with the AWS Prototyping and Cloud Engineering (PACE) team to develop BugManager, an AI-powered solution for automated bug triaging. Leveraging Amazon Bedrock, this initiative dramatically improved Miro’s bug routing accuracy, achieving six times fewer team reassignments and a five-fold reduction in time-to-resolution, transforming resolution from days to hours.

Automating bug triaging in Miro’s environment, which involves nearly 100 engineering teams, presents a complex multi-class classification problem. Bug reports are often unstructured, lacking context, and contain diverse data types, including text, stack traces, screenshots, and videos. Achieving high-accuracy classification requires augmenting these reports with relevant product information from various sources like GitHub pull requests, Confluence documentation, and previously resolved tickets.
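The augmentation step described above can be sketched as a simple function that merges a raw report with retrieved context snippets into one text block for classification. This is an illustrative sketch, not Miro's implementation: the `BugReport` fields and snippet wording are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    # Raw, often unstructured fields pulled from a bug ticket (hypothetical schema)
    title: str
    description: str
    stack_trace: str = ""

def augment_report(report: BugReport, context_snippets: list[str]) -> str:
    """Combine a raw bug report with retrieved product context
    (e.g. related pull requests, docs pages, previously resolved
    tickets) into a single text block for downstream classification."""
    parts = [f"Title: {report.title}", f"Description: {report.description}"]
    if report.stack_trace:
        parts.append(f"Stack trace:\n{report.stack_trace}")
    for i, snippet in enumerate(context_snippets, 1):
        parts.append(f"Context {i}: {snippet}")
    return "\n\n".join(parts)
```

In practice the snippets would come from retrieval over sources like GitHub, Confluence, and the ticket archive; here they are passed in as plain strings to keep the sketch self-contained.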

Furthermore, the organizational structure is highly dynamic: teams merge, new teams form, and product responsibilities continuously evolve. Traditional natural language processing (NLP) text classifiers, such as fine-tuned BERT models or fine-tuned large language model (LLM) classifiers, struggled in this environment. They require frequent retraining whenever the organization changes and depend on labeled data that may not exist for new structures. Miro saw rapidly degrading performance with a previous solution based on a fine-tuned GPT model, prompting the shift to a more robust, prompt-driven LLM approach.
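One way a prompt-driven approach sidesteps retraining is to embed the current team roster in the prompt at inference time, so a reorganization only changes an input, not a model. A minimal sketch under stated assumptions: the team roster and function names are hypothetical, and `route_bug` uses the real Amazon Bedrock Converse API via boto3 but is not from Miro's BugManager.

```python
def build_routing_prompt(bug_text: str, teams: dict[str, str]) -> str:
    """Build a classification prompt embedding the *current* team
    roster; org changes only alter this input, never a trained model."""
    roster = "\n".join(f"- {name}: {scope}" for name, scope in teams.items())
    return (
        "Route the bug report below to exactly one engineering team.\n"
        f"Teams and their responsibilities:\n{roster}\n\n"
        f"Bug report:\n{bug_text}\n\n"
        "Answer with the team name only."
    )

def route_bug(bug_text: str, teams: dict[str, str], model_id: str) -> str:
    # Hypothetical wiring of the Bedrock Converse API; requires AWS
    # credentials and network access, so it is not exercised here.
    import boto3

    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId=model_id,
        messages=[
            {"role": "user",
             "content": [{"text": build_routing_prompt(bug_text, teams)}]}
        ],
    )
    return response["output"]["message"]["content"][0]["text"].strip()
```

Because the roster is plain data, adding or merging teams means editing a dictionary rather than collecting labels and retraining a classifier.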
