Red Hat eyes cloud-native Java future with Quarkus
Red Hat's Quarkus project aims to raise Java up to the cloud and update the popular programming language for cloud-based computing.
Red Hat's latest initiative, Quarkus, aims to usher in a cloud-native Java future -- and shift the core of innovation in enterprise Java.
Numerous efforts over the years have attempted to make Java more cloud-native, such as Google's Dalvik virtual machine used in Android. None has demonstrated as much promise as Red Hat's Quarkus, which builds on two Oracle-led projects, GraalVM and Substrate VM, to produce cloud-native Java applications that are much faster and smaller and that run in a Linux container as part of a Kubernetes deployment.
Substrate VM, a subsystem of Graal, uses ahead-of-time (AOT) compilation to compile Java to a native binary image, said Mark Little, vice president of engineering and CTO of JBoss Middleware at Red Hat.
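Little's description can be sketched with a trivial example. The class name and build commands below are illustrative, and assume a GraalVM installation with the native-image tool available:

```java
// Hello.java -- a plain Java program that GraalVM's native-image tool
// can compile ahead of time into a standalone binary. Illustrative build
// steps (assuming GraalVM with native-image on the PATH):
//
//   javac Hello.java
//   native-image Hello    // produces a native executable; no JVM needed at run time
//
public class Hello {
    // Factored into a method so the behavior is easy to verify.
    static String greeting() {
        return "Hello from a native image";
    }

    public static void main(String[] args) {
        System.out.println(greeting());
    }
}
```

The same source runs unchanged on a stock JVM; only the build step differs.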
With Quarkus -- initially known as Protean within Red Hat -- Red Hat wants to make Java a leading platform in Kubernetes and serverless environments, and offer developers a unified programming model to address a wider range of distributed application architectures, Little said.
Oracle ceded its stewardship of enterprise Java EE to the Eclipse Foundation in 2017. Red Hat, IBM and a few other companies took over innovation on the platform, while Oracle focused on the standard Java edition. Although several Oracle engineers remain contributors to the Graal and Substrate efforts -- and thus to Quarkus -- Red Hat leads this charge.
The enterprise Java community continues to work to make Java a good citizen in the cloud, with efforts to optimize OpenJDK for Linux containers and Kubernetes. However, developers still have concerns about how well the JVM and Linux containers work together, from the amount of memory the JVM consumes to boot time and general performance issues.
The standard HotSpot JVM, which Oracle still maintains and supports, has evolved into more of a first-class citizen for cloud-native applications, but it's not quite ready to be the de facto VM of choice in that environment, said Martijn Verburg, CEO of jClarity and co-lead of the London Java User Group.
"Red Hat and others are trying to look at a stop-gap measure until the VM research and implementation is complete at OpenJDK," he said. "Graal, which is experimentally in OpenJDK as an option, is certainly a technology worth looking at."
Write once, run anywhere
Java's selling point since its inception has been that a developer can write a program once and run it anywhere. That worked because developers compiled Java down to bytecode to run on the JVM, which isolates the app from the underlying operating system. However, this takes up a lot of memory and bogs down boot times.
Compiling Java ahead of time to a native binary sidesteps that overhead. The tradeoff, however, is that the native binary can't run on any arbitrary operating system. But in the cloud, developers typically don't worry about running a binary on different operating systems, because most of the time they just target Linux containers.
In the majority of cases, that tradeoff works in developers' favor: the native executable has a much smaller footprint than the JVM running the original Java application, and it boots a lot faster, Little said.
"We're talking about boot times in the order of seconds prior to using something like Substrate, [which can deliver] boot times in the order of one to five milliseconds," he noted.
Red Hat's cloud-native Java push has a serverless piece
In addition to Graal and Substrate, Red Hat continues to build on its work in the serverless computing space and to provide a Kubernetes-native environment for enterprise Java developers to build apps and services for the Red Hat OpenShift container platform.
In a serverless environment, developers want to spin up their app or service quickly in response to an event, such as an incoming message or alert. The ability to quickly spin up and tear down services is important because developers not only want to do these things in real time, they also want to consume only the CPU and memory that they require as overuse affects their bill for cloud usage, Little said.
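The fast spin-up Little describes can be illustrated with a JDK-only sketch. The port, route, and timing code below are illustrative assumptions, not Quarkus or OpenShift APIs:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class FastBootService {
    // The response body, factored out so it can be checked independently.
    static String respond() {
        return "pong";
    }

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        // Bind an HTTP server on an illustrative port and answer one route,
        // the kind of event-driven endpoint a serverless platform spins up.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/ping", exchange -> {
            byte[] body = respond().getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.printf("Ready in %d ms%n", (System.nanoTime() - start) / 1_000_000);
        server.stop(0); // a serverless platform would tear the instance down here
    }
}
```

Compiled to a native image, a service like this starts in milliseconds rather than seconds, which is what makes the spin-up-and-tear-down model economical.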
"I see this a little like the Kubernetes/Docker versus serverless/function-as-a-service paradigm," Verburg said. "I believe that the combination on the left is a stepping stone to the combination on the right."