Zencoder Develops AI Coding Agents for Enterprise Adoption

Andrew Filev, co-founder and CEO of Zencoder, has been building AI coding agents since leaving his previous startup, Wrike, in 2023 to found For Good AI. The company initially focused on frontier AI research, but Filev realized that a small startup couldn't compete with established players like Anthropic and OpenAI. Instead, Zencoder pivoted to developing enterprise-ready AI coding tools.

The company's inaugural product, Zencoder.ai, uses Anthropic's Claude models at its core. Filev aims to automate routine development work such as writing tests, code reviews, and custom tasks. He believes agents need access to context and to feedback loops that let them verify their results, which he sees as essential for reliability.

Zencoder has launched various coding agents, including a syntax-checking agent, a unit-testing agent, and custom review agents. The company also lets developers create their own agents and share them across their organizations. A directory of user-created agents provides an additional avenue for adoption.

Most recently, Zencoder launched Zentester, a tool for testing user interfaces and APIs using Anthropic's computer use model. Filev sees the pairing of coding and testing as essential for shipping production code. Alex Albert, head of developer relations at Anthropic, agrees that the models have improved significantly with each generation, making agentic coding more viable.

Despite potential competition from established players like Anthropic, Filev isn't worried, citing a rapidly commoditizing market and the strength of the partnership. Looking ahead, Zencoder aims to build sophisticated agentic pipelines that let developers focus on harder problems while agents handle routine tasks.

Source: https://thenewstack.io/how-zencoder-is-building-its-coding-agents