For Complex Applications, Rust is as Productive as Kotlin

    In this article, we will compare one apple (IntelliJ Rust) to one orange (rust-analyzer) to reach general and sweeping conclusions. Specifically, I want to present a case study supporting the following claim:

    For complex applications, Rust is as productive as Kotlin.

    For me, this is an unusual claim to argue: I always thought exactly the opposite, but I am not so sure now. I came to Rust from C++. I was of the opinion that this is a brilliant low-level language and always felt puzzled at people writing higher-level things in Rust. Clearly, choosing Rust means taking a productivity hit, and using Kotlin, C# or Go just makes much more sense if you can afford GC. My list of Rust criticisms starts with this objection.

    What moved my position in the other direction was my experience as the lead developer of rust-analyzer and IntelliJ Rust. Let me introduce the two projects.

    IntelliJ Rust is the plugin for the IntelliJ Platform, providing Rust support. In effect, it is a Rust compiler front-end, written in Kotlin and making use of the language-support features of the platform. These features include lossless syntax trees, a parser generator, and persistence and indexing infrastructure, among others. Nonetheless, as programming languages differ a lot, the bulk of the logic for analyzing Rust is implemented in the plugin itself. Presentational features like the completion list come from the platform, but most of the language semantics is hand-written. IntelliJ Rust also includes a bit of a Swing GUI.

    rust-analyzer is an implementation of the Language Server Protocol for Rust. It is a Rust compiler front-end written from scratch with an eye towards IDE support. It makes heavy use of the salsa library for incremental computation. Beyond the compiler, rust-analyzer includes code for managing the long-lived, multithreaded process of the language server itself.

    The projects are essentially equivalent in scope — Rust compiler front-ends suitable for IDEs. The two biggest differences are:

    • IntelliJ Rust is a plugin, so it can re-use code and design patterns of the surrounding platform.

    • rust-analyzer is the second system, so it leverages the experience of IntelliJ Rust for a from-scratch design.

    The internal architecture of the two projects also differs a lot. In terms of Three Architectures, IntelliJ Rust is map-reduce, and rust-analyzer is query-based.

    Writing an IDE-ready compiler is a high-level task. You don’t need to talk to the operating system directly. There are some fancy data structures and concurrency here and there, but they are also high-level. It’s not about implementing crazy lock-free schemes; it’s about maintaining application state and sanity in a multithreaded world. The bulk of the compiler is symbolic manipulation, arguably best suited for Lisp. Picking a garbage-collected language for such a task (for example, OCaml) doesn’t have any intrinsic downsides.

    At the same time, the task is pretty complex and unique. The ratio of “your code” vs “framework code” when implementing features is much higher than in a typical CRUD backend.

    Now that the projects are introduced, let’s take two roughly equivalent slices of history.

    Both are about two years old, with 1–1.5 developers working full time and a vibrant, thriving community of open-source contributors. There are 52k lines of Kotlin and 66k lines of Rust.

    Both delivered roughly equivalent feature sets at that time. To be honest, I still don’t really believe this :) rust-analyzer started from zero, it didn’t have a decade’s worth of Java classes to bootstrap from, and the productivity drop between Kotlin and Rust is supposed to be huge. But it’s hard to argue with reality. Instead, let me try to reflect on my experience building both, and to try to explain Rust’s surprising productivity.

    Learning Curve

    It’s easy to characterize Kotlin’s learning curve — it is nearly zero. I started IntelliJ Rust without any Kotlin experience and never felt that I needed to specifically learn Kotlin.

    When I switched to rust-analyzer, I was pretty experienced with Rust. I would say that one definitely needs to deliberately learn Rust; it’s hard to pick it up on the go. Ownership and aliasing control are novel concepts (even if you come from C++), and taking a holistic approach to learning them pays off. After the initial learning step, the ride is generally smooth.

    By the way, this is the perfect place to plug our Rust courses and tailor-made trainings :-) The next introduction to Rust is happening this December!

    Modularity

    This, I think, is the biggest factor. Both projects are moderately large, in terms of scope as well as in terms of the amount of source code. I believe that the only way to ship big things is to split them into independent-ish chunks and implement the chunks separately.

    I also find most of the languages I am familiar with to be pretty horrible with respect to modularity. More generally, I am amused by the FP vs OO debate, as it seems that “why does no one do modules right?” is a more salient issue.

    Rust is one of the few languages which have a first-class concept of libraries. Rust code is organized on two levels:

    • as a tree of inter-dependent modules inside a crate

    • and as a directed acyclic graph of crates

    Cyclic dependencies are allowed between modules, but not between crates. Crates are units of reuse and privacy: only a crate’s public API matters, and it is crystal clear what that public API is. Moreover, crates are anonymous, so you don’t get name conflicts and dependency hell when mixing several versions of the same crate in a single crate graph.

    This makes it very easy to make two pieces of code not depend on each other (non-dependencies are the essence of modularity): just put them in separate crates. During code review, only changes to Cargo.tomls need to be monitored carefully.
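
    To make the privacy boundary concrete, here is a minimal sketch (the crate and item names are invented for illustration, not taken from either project). Only the pub items of a crate are visible to the crates that depend on it:

    // lib.rs of a hypothetical `syntax` crate. A dependent crate (say, a
    // hypothetical `hir` crate) sees only the `pub` items below.
    pub struct SyntaxTree {
        tokens: Vec<String>, // private field: invisible from outside the crate
    }
    
    pub fn parse(text: &str) -> SyntaxTree {
        SyntaxTree { tokens: tokenize(text) }
    }
    
    // Private helper: free to change without affecting dependent crates.
    fn tokenize(text: &str) -> Vec<String> {
        text.split_whitespace().map(String::from).collect()
    }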

    At the time of comparison, rust-analyzer is split into 23 internal crates, with a handful of general-purpose ones published on crates.io. In contrast, IntelliJ Rust is a single Kotlin module, where everything can depend on everything else. Although the internal organization of IntelliJ Rust is pretty clean, it is not reflected in the file-system layout or the build system, and it needs constant maintenance.

    Build System

    Managing a project’s build takes a significant amount of time, and it has a multiplicative effect on everything else.

    Rust’s build system, Cargo, is very good. It’s not perfect, but it is a breath of fresh air after Java’s Gradle.

    Cargo’s trick is that it doesn’t try to be a general-purpose build system. It can only build Rust projects, and it has rigid expectations about the project structure. It’s impossible to opt out of the core assumptions. Configuration is a static, non-extensible TOML file.

    In contrast, Gradle allows a free-form project structure, and it is configured via a Turing-complete language. I feel like I’ve spent more time learning Gradle than learning Rust! Running wc -w gives 182_817 words for the Rust book and 280_506 for Gradle’s user guide.

    Additionally, Cargo is just faster than Gradle in most cases.

    Of course, the biggest downside is that custom build logic is not expressible in Cargo. Both projects need a substantial amount of logic beyond mere compilation to deliver the final result to the user. For rust-analyzer, this is handled by a hand-written Rust script, which works perfectly at this scale.
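
    That script is not reproduced here, but a minimal sketch of the idea — a plain Rust binary that shells out to cargo, with invented task names — might look like this:

    // A hypothetical xtask-style helper: a small Rust binary that drives the build.
    use std::process::Command;
    
    fn main() {
        let task = std::env::args().nth(1).unwrap_or_default();
        match task.as_str() {
            // Build an optimized binary; packaging steps would follow here.
            "dist" => run("cargo", &["build", "--release"]),
            _ => eprintln!("usage: xtask dist"),
        }
    }
    
    fn run(cmd: &str, args: &[&str]) {
        let status = Command::new(cmd).args(args).status().expect("failed to spawn");
        assert!(status.success(), "`{} {}` failed", cmd, args.join(" "));
    }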

    Ecosystem

    Language-level support for libraries and a top-notch build system/package manager allow for a thriving ecosystem. rust-analyzer relies on third-party libraries much more than IntelliJ Rust does. Some parts of rust-analyzer are also published to crates.io for other projects to reuse.

    Additionally, the low-level nature of the Rust programming language often allows for “perfect” library interfaces — interfaces which exactly reflect the underlying problem, without imposing intermediate language-level abstractions.

    Basic Conveniences

    I feel that Rust is significantly more productive when it comes to basic language nuts and bolts — structs, enums, functions, etc. This is not specific to Rust — any ML-family language has them. However, Rust is the first industrial language which wraps these features in a nice package, not constrained by backwards compatibility. I want to list the specific features which I think allow producing maintainable code faster in Rust.

    Emphasis on data over behavior. Aka, Rust is not an OOP language. The core idea of OOP is dynamic dispatch — which code is invoked by a function call is decided at runtime (late binding). This is a powerful pattern which allows for flexible and extensible systems. The problem is, extensibility is costly! It’s better to apply it only in certain designated areas. Designing for extensibility by default is not cost-effective. Rust puts static dispatch front and center: it is mostly clear what’s going on just by reading the code, as it is independent of the runtime types of the objects.
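
    As a sketch (not code from either project), the difference is visible right in the signatures — a generic function is dispatched statically, while a dyn Trait one is dispatched at runtime:

    trait Render {
        fn render(&self) -> String;
    }
    
    struct Plain(String);
    
    impl Render for Plain {
        fn render(&self) -> String {
            self.0.clone()
        }
    }
    
    // Static dispatch: the concrete type is known at each call site, so the
    // call is resolved at compile time (and can be inlined).
    fn print_static<T: Render>(item: &T) {
        println!("{}", item.render());
    }
    
    // Dynamic dispatch: the concrete type is known only at runtime, and the
    // call goes through a vtable. In Rust this is opt-in and visible in the signature.
    fn print_dynamic(item: &dyn Render) {
        println!("{}", item.render());
    }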

    One small syntactic thing I enjoy about Rust is how it puts fields and methods into different blocks:

    struct Person {
        first_name: String,
        last_name: String,
    }
    
    impl Person {
        fn full_name(&self) -> String {
            format!("{} {}", self.first_name, self.last_name)
        }
    }

    Being able to see all the fields at a glance makes understanding the code much simpler. Fields convey much more information than methods.

    Sum types. Rust’s humbly named enums are full algebraic data types. This means that you can express the idea of a disjoint union:

    enum Either<A, B> { A(A), B(B) }

    This is hugely useful in day-to-day programming in the small, and sometimes during programming in the large as well. To give one example, two of the core concepts in an IDE are references and definitions. A definition like let foo = 92; assigns a name to an entity which can be used down the line. A reference like foo + 90 refers to some definition. When you ctrl-click on a reference, you go to the definition.

    The natural way to model this in Kotlin is to add an interface Definition and an interface Reference. The problem is, some things are both!

    struct S { field: i32 }
    
    fn process(s: S) {
        match s {
            S { field } => println!("{}", field + 2)
        }
    }

    In this example, the field in the S { field } pattern is both a reference to the field: i32 definition and a definition of a local variable with the name field! Similarly, in

    let field = 92;
    let s = S { field };

    field conceptually holds two references — one reference to a local variable, and one reference to a field definition.

    In IntelliJ Rust, this is generally handled by downcasting special cases. In rust-analyzer, this is handled by an enum which lists all of the special cases.

    rust-analyzer is very enum-heavy, and there’s a lot of code which boringly matches over N variants and does almost the same thing in each. This code is more verbose than the IntelliJ Rust alternative of special-casing specific odd cases, but it is much easier to understand and support. You don’t need to keep the broader context in your head to understand what special cases might be possible.
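
    A sketch of what such enum-heavy code might look like (the names are invented for illustration; rust-analyzer’s actual definitions differ):

    // A hypothetical enum listing every kind of thing a name can resolve to.
    enum Definition {
        Local(String),
        Field(String),
        Function(String),
    }
    
    // Repetitive, but every possible case is spelled out, so no broader
    // context is needed to see what can happen.
    fn display_name(def: &Definition) -> String {
        match def {
            Definition::Local(name) => format!("local `{}`", name),
            Definition::Field(name) => format!("field `{}`", name),
            Definition::Function(name) => format!("fn `{}`", name),
        }
    }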

    Error Handling. When it comes to null safety, Kotlin and Rust are mostly equivalent in practice. There are some finer distinctions here between union types and sum types, but they are irrelevant in real code in my experience. Syntactically, Kotlin’s take on ? and ?: feels a little more convenient more often.

    However, when it comes to error handling (Result<T, E> rather than Option<T>), Rust wins hands down. Having ? annotate error paths at the call site is very valuable. Encoding errors in a function’s return type, in a way which works with higher-order functions, makes for robust code. I dread calling external processes in Kotlin and Python, because it is exactly the place where exceptions are common, and where I forget to handle at least one case every single time.
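
    As an illustration (a hypothetical helper, not code from either project), this is what calling an external process looks like with Result and ? — every fallible step is visibly annotated, and the error is part of the return type, so the caller can’t forget about it:

    use std::process::Command;
    
    // Run `git rev-parse HEAD` and return the commit hash.
    fn current_commit() -> Result<String, Box<dyn std::error::Error>> {
        let output = Command::new("git").args(["rev-parse", "HEAD"]).output()?;
        if !output.status.success() {
            return Err("git exited with a non-zero status".into());
        }
        let hash = String::from_utf8(output.stdout)?;
        Ok(hash.trim().to_string())
    }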

    Fighting the Borrow Checker

    Although Rust’s types and expressions usually allow one to state precisely what one wants, there are still cases when the borrow checker gets in the way. For example, here we can’t return an iterator which wants to borrow from a temporary: utils.rs.
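
    The linked code is not reproduced here, but a minimal sketch of this class of limitation looks something like the commented-out function below; the usual workaround is to give up on laziness and return owned data:

    // Does not compile: the iterator would borrow from `line`, a local value
    // that is dropped when the function returns.
    //
    // fn first_line_words(text: &str) -> impl Iterator<Item = &str> {
    //     let line = text.lines().next().unwrap_or("").to_string();
    //     line.split_whitespace() // error: `line` does not live long enough
    // }
    
    // Workaround: collect into a Vec and return owned data instead.
    fn first_line_words(text: &str) -> Vec<String> {
        let line = text.lines().next().unwrap_or("").to_string();
        line.split_whitespace().map(String::from).collect()
    }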

    When learning Rust, problems of this kind are very frequent. This is primarily because applying the traditional “soup of pointers” design to Rust doesn’t work. With experience, design-related borrow checker errors tend to fade away — building software as a tree of components works, and it is almost always a good design. The residual borrow checker limitations are annoying, but they don’t matter in the grand scheme of things.

    Concurrency

    IntelliJ Rust and rust-analyzer use a similar approach to concurrency: there’s a global reader-writer lock guarding the base application state, and a large number of thread-safe caches for derived data.

    Managing this in Kotlin is hard. More than once I asked myself “should I mark this as volatile?” without a clear way to get the answer. The way to figure out whether something is supposed to be thread-safe in Kotlin is to read the docs and hunt for all usages.

    In contrast, “is this type thread-safe?” is a property which is reflected in Rust’s type system (via the Send and Sync traits). The compiler automatically derives thread safety, and it checks that non-thread-safe types are not accidentally shared.
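
    A small sketch (again, not from either project) of how this shows up in practice; the classic assert_sync helper makes the compiler state the verdict:

    use std::cell::RefCell;
    use std::sync::Mutex;
    
    // Compiles only if `T` can be shared between threads.
    fn assert_sync<T: Sync>() {}
    
    struct UnsafeCache {
        entries: RefCell<Vec<String>>, // interior mutability without a lock
    }
    
    struct SafeCache {
        entries: Mutex<Vec<String>>, // the same data behind a lock
    }
    
    fn main() {
        // assert_sync::<UnsafeCache>(); // error: `RefCell<...>` cannot be shared between threads safely
        assert_sync::<SafeCache>(); // fine: `Mutex` makes the data safe to share
    }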

    A bug which happened in both IntelliJ Rust and rust-analyzer is a good case study here. Recall that both make use of caches shared between threads. In both projects, I once devised a smart optimization which unfortunately involved placing (unintentionally) thread-unsafe data into this shared cache. In IntelliJ Rust, it took us a long while to notice that something was wrong in the first place, and even more investigation to pin down the root cause. In rust-analyzer, I only wasted the time spent implementing the optimization itself. After I fixed what I thought would be the last compilation error, the compiler somberly noted that putting A, which contains B, which contains C, which contains a non-thread-safe D, into a structure which is shared across threads might not be the best idea!

    Performance

    My general experience developing IntelliJ Rust is “no matter what I do, it is not as fast as I’d like it to be”. My experience with rust-analyzer is exactly the opposite: “no matter what I do, it is fast enough”.

    As an anecdote, in the early days I was implementing a fixed-point iteration name-resolution algorithm in rust-analyzer. This is an IDE-hostile bit: if done naively, it requires redoing quite a bit of work on every keystroke. When I built rust-analyzer with this change, I finally saw completion noticeably lag. “This is it”, I thought, “I should stop just using naive algorithms and start applying some optimizations”. Well, it turns out I had taken a debug version of rust-analyzer for a test drive! Rebuilding with --release fixed the issue.

    Aside: the fact that debug builds are often unusably slow is a big issue for Rust.

    Having good baseline performance definitely helps with productivity — optimizing code for performance usually makes it harder to refactor. The longer you can punt on low-level performance optimizations (as opposed to architecture-level performance work), the less total work you’ll do.

    Performance Predictability

    What is more important is that Rust’s performance is predictable. Generally, running a program N times gives more-or-less the same result. This is in sharp contrast to the JVM, where you need to do a lot of warm-up to stabilize even microbenchmarks. I never managed to get reproducible macro-benchmarks for IntelliJ Rust.

    More generally, without a runtime there’s much less variation in the behavior of the program. This makes chasing regressions much more effective.

    Safety

    Just to be clear, one thing which was not different is memory safety: there were no segfaults or heap corruptions in either project. Similarly, null pointer dereferences weren’t an issue.

    These are the most significant benefits of Rust over other systems languages, but for the applications at hand they were irrelevant.

    Conclusions

    I think the unifying topic of many of the points discussed here is “programming in the large”. Modularity, build process, and predictability only start to matter once the code grows in volume, age, and number of contributors. I like Titus Winters’ formulation that “software engineering is programming integrated over time”. Rust excels at this kind of work; it is a scalable language.

    Another thing I’ve come to appreciate more is that Rust might be a plausible candidate for a nearly universal language. To throw in another great quote (by John Carmack), “the right tool for the job is often the tool you are already using”. Context switching and bridging different technologies take a lot of effort. With Rust, you often don’t need to! It naturally scales down to the bare metal. As this post explores, it works fine for application-level programming as well. Rust even works for scripting to some extent! rust-analyzer’s build infrastructure is, in theory, a better fit for bash or Python, but in practice Rust works just fine, and it is delightfully cross-platform.

    Finally, I want to reiterate that the present case study concerns only two projects which are similar, but not twins. The context is also important: not relying on third-party libraries for core functionality is a bit unusual for application programming. So, while I think that this experience and analysis point in the qualitatively right direction, your quantitative results may vary greatly!