IBM Unleashes the Power of Granite Code LLM Series
The AI coding revolution is in full swing, with new coding assistants emerging left and right. These assistants help developers debug, write better tests, autocomplete code, look up documentation, and even generate full blocks of code. The question on everyone’s mind is: will AI ultimately replace the programmer? We addressed this concern in a previous article, concluding that software engineering will never die, but rather adapt to and incorporate technologies like generative AI.
AI coding assistants are changing the game
Behind the scenes, these coding assistants rely on one or more Large Language Models (LLMs) to perform their magic. For instance, Tabnine uses Mistral, GPT-3.5 Turbo, and GPT-4 Turbo, while GitHub Copilot uses Codex and GPT-4. IBM is now introducing its own decoder-only code models, dubbed the Granite Code LLM Series, as part of its watsonx Granite collection.
IBM Watsonx Granite
These models have been trained on code written in 116 programming languages and range in size from 3 to 34 billion parameters, in both base and instruction-following variants. They are well suited to generative code tasks, such as fixing bugs, explaining code, writing documentation, and generating new code, as well as complex application modernization tasks.
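To make this concrete, here is a minimal sketch of how a base (completion-style) code model like these could be loaded and prompted through the Hugging Face `transformers` library. The model ID and the comment-style prompt framing below are assumptions for illustration, not IBM's documented workflow; check the model's official page for the exact checkpoint names and recommended usage.

```python
# Hedged sketch: prompting a decoder-only code model for completion.
# MODEL_ID is an assumed Hub identifier, shown only for illustration.

MODEL_ID = "ibm-granite/granite-3b-code-base"  # assumption: verify on the Hub

def build_prompt(instruction: str) -> str:
    # Base (non-instruct) code models are completion models, so we frame
    # the task as a comment followed by a function stub to be continued.
    return f"# Task: {instruction}\ndef solution():\n"

def generate(instruction: str, max_new_tokens: int = 128) -> str:
    # Imported lazily so the sketch can be read without the library
    # (or the multi-gigabyte model weights) installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Usage (downloads the model weights on first run):
#   print(generate("reverse a string"))
```

The instruction-following variants would instead take a chat-style prompt; the base variant shown here simply continues the text it is given, which is the mode coding assistants use for autocompletion.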
Code generation made easy
The results of benchmarking these models are encouraging, with strong performance on code synthesis, fixing, explanation, editing, and translation across most major programming languages, including Python, JavaScript, Java, Go, C++, and Rust.
Benchmarking results
In conclusion, the Granite Code LLM Series is a powerful tool that has the potential to change the way we approach coding. With its ability to handle generative code tasks with ease, it’s an exciting time for developers and programmers alike.
“The future of coding is here, and it’s powered by AI.” - Anonymous