Is it time for operator overloading in Java?

Love it or loathe it, this article makes the case that operator overloading is essential for making code easier to read, write, and debug.

June 8, 2020


Operator overloading is one of those strange language features you either love or loathe. The loathing part is understandable, since misusing operator overloading can very quickly lead to confusing code—and more confusing bugs. I’ll talk about the loving part shortly, in the context of programming mathematical operations.

First, to define the term, let’s look to ISO’s C++ wiki: “Operator overloading allows C/C++ operators to have user-defined meanings on user-defined types (classes). Overloaded operators are syntactic sugar for function calls.”

Thus, if you define a class Foo, you should also be able to define an implementation for the plus operator such that FooBar = Foo + Bar;.

Operator overloading is widely considered to be a trivial language feature. Syntactic sugar is the term most frequently used to describe this phenomenon. The syntactic sugar part is true: Doesn’t nearly every programming language include a syntactic sugar abstraction that allows you to write complex machine code...without actually having to do that?

That being said, here’s a first look at why operator overloading is so often demonized in the Java domain. The crux of the skepticism is the << operator, which is so amiably overloaded in the following C++ statement:

cout<<"what the #@$#"<<endl; 

Which is equivalent to

System.out.println("what the #@$#");

Here, instead of shifting an object to the left, a string is piped into cout, the standard output (which is usually the console).

Some languages such as C++ allow you to overload some exotic operators such as the comma, which, strangely enough, does have some usages. Other languages allow you to define your own operators. There are some interesting usages, but unfortunately, they are beyond the scope of this article, which is to talk about the value of operator overloading in Java.

Overloading exotic operators aside, the fact of the matter is that if somebody overloads an operator and you use that operator, you are no longer guaranteed that the operation that is executed adheres to the mathematical purpose of that operator.

Consider the original example, FooBar = Foo + Bar;.

  • How do you know the + operator actually does anything resembling addition?
  • What if it changes either Foo or Bar?
  • Or, what if it just plainly returns a random FooBar on every execution?
  • What about side-effects?

Those are all good questions, but they are not really unique to operator overloading. Let’s say you rewrote the expression as follows:

FooBar =;

Does that change really guarantee anything different? This article could go on for 20 pages about why almost anything harmful you can think of about operator overloading isn’t much different in the method-sphere. But my intention here is to answer the question of whether Java is ready for operator overloading, and how operator overloading can be useful. It’s time to answer that second question.
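The argument that method calls offer no extra guarantees can be made concrete with a short, hypothetical sketch (the Counter class and its add method are my own illustration, not code from this article): a method named add is just as free to misbehave as an overloaded + would be.

```java
// A hypothetical sketch: a method named "add" offers no more guarantees
// than an overloaded "+" would.
class Counter {
    int value;

    Counter(int value) {
        this.value = value;
    }

    // Looks like addition, but it mutates the receiver: a hidden side effect.
    Counter add(Counter other) {
        this.value += other.value;
        return new Counter(this.value);
    }
}

public class MethodsCanMisbehaveToo {
    public static void main(String[] args) {
        Counter foo = new Counter(1);
        Counter bar = new Counter(2);
        Counter fooBar = foo.add(bar);
        System.out.println(fooBar.value); // 3
        System.out.println(foo.value);    // 3 -- foo was silently changed
    }
}
```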

Is operator overloading useful, and if so, how?

Take a couple of seconds to look at the following piece of code. What do you think it means?

final BigDecimal result = a.multiply(x.pow(2)).add(b.multiply(x.add(c)));

If your answer is that it’s a quadratic equation in standard form, as shown in Figure 1, you are most likely correct.

Quadratic equation

Figure 1. Quadratic equation

What would the code look like if you had used operator overloading, though? It would look like this and would be easier to read.

final BigDecimal result = a * (x * x) + b * x + c;

Let’s take it up a notch. Say you want to derive x from the quadratic equation, so you rewrite the equation as shown in Figure 2:

Rewritten equation

Figure 2. Rewritten equation

What would the rewritten equation look like in code that uses BigDecimal without operator overloading?

final BigDecimal sqrt = b.pow(2).subtract(a.multiply(c).multiply(BigDecimal.valueOf(4))).sqrt(MathContext.DECIMAL64);
final BigDecimal x1 = b.negate().add(sqrt).divide(a.multiply(BigDecimal.valueOf(2)), MathContext.DECIMAL64);
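For completeness, here is a self-contained sketch of that method-call version with concrete coefficients (the class name, the coefficients, and the use of MathContext.DECIMAL64 are my additions; BigDecimal.sqrt requires a MathContext and is available since Java 9):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class QuadraticRoots {
    static final MathContext MC = MathContext.DECIMAL64;

    // One root of a*x^2 + b*x + c = 0 via the quadratic formula,
    // written with BigDecimal method calls only.
    static BigDecimal root(BigDecimal a, BigDecimal b, BigDecimal c) {
        BigDecimal sqrt = b.pow(2)
                .subtract(a.multiply(c).multiply(BigDecimal.valueOf(4)))
                .sqrt(MC);
        return b.negate().add(sqrt).divide(a.multiply(BigDecimal.valueOf(2)), MC);
    }

    public static void main(String[] args) {
        // x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3); the "+" branch gives 3
        System.out.println(root(BigDecimal.ONE, BigDecimal.valueOf(-5), BigDecimal.valueOf(6)));
    }
}
```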

What if you used the following code and operator overloading?

final BigDecimal sqrt = (b * b - 4 * a * c).sqrt();
final BigDecimal x1 = (-b + sqrt) / (2 * a);

It’s a matter of taste, but I think many developers, perhaps most developers, would find the operator overloading version more readable, more writable, and easier to validate.

More writable? That’s a strange thing to say. But the reason abstractions are used in code is for both readability and writability. In my opinion, 4*a*c is much easier to write than a.multiply(c).multiply(BigDecimal.valueOf(4)). It is also simpler, which leaves less room for bugs.

Talking about bugs

I hope you will forgive me, but my first example actually has a bug in it that most people, myself included, would overlook. To give you a hint, I’ll rewrite the operator overloading version to also include the bug, and the bug will almost instantly become visible:

final BigDecimal result = a * (x * x) + b * (x + c);

The bug is a simple operator precedence defect, which is visible right away in the operator overloading version, but equally disastrous in both versions of the code.
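The defect is easy to demonstrate with concrete numbers. Here is a runnable sketch (the coefficient values 2, 3, 5 and x = 7 are my own, chosen arbitrarily) comparing the intended a*x^2 + b*x + c with the misparenthesized a*x^2 + b*(x + c):

```java
import java.math.BigDecimal;

public class PrecedenceBug {
    static final BigDecimal a = BigDecimal.valueOf(2);
    static final BigDecimal b = BigDecimal.valueOf(3);
    static final BigDecimal c = BigDecimal.valueOf(5);
    static final BigDecimal x = BigDecimal.valueOf(7);

    // Intended formula: a*x^2 + b*x + c
    static BigDecimal intended() {
        return a.multiply(x.pow(2)).add(b.multiply(x)).add(c);
    }

    // The closing parenthesis lands one call too late: a*x^2 + b*(x + c)
    static BigDecimal buggy() {
        return a.multiply(x.pow(2)).add(b.multiply(x.add(c)));
    }

    public static void main(String[] args) {
        System.out.println(intended()); // 124
        System.out.println(buggy());    // 134
    }
}
```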

A more subtle bug, which is solved for free when you lean on the compiler to interpret operator overloading for you, concerns the associativity of operations on user-defined types. Consider this example:

class Vector { float x, y, z; }
final Vector result = vec1 + vec2 + vec3;
final Vector result =;

In 999 out of 1,000 cases, both these lines of code will produce the same results. But in some cases, the result is different. The reason is operator precedence.

What difference does it make whether you do a + b + c or c + b + a, you might ask? Aren’t these operations associative?

In the real world, yes. But in the digital world, it depends.

As you can see, the Vector class contains three single-precision floating-point members. Binary floating-point arithmetic works by rounding the result of each operation to the closest representable binary value. Thus, if a.x + b.x = 0.23548787, the result might be computed as 0.2354877f in the code, because a 32-bit float cannot represent every fraction exactly.

Applying that information to the code, you now can see that a + b produces an approximate value, which is then added to c, which again produces yet another approximate value.
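The same representation problem is easy to demonstrate with doubles, where it is perhaps most familiar:

```java
public class Rounding {
    public static void main(String[] args) {
        // Neither 0.1 nor 0.2 has an exact binary representation, so the
        // sum is the nearest representable double rather than 0.3 exactly.
        System.out.println(0.1 + 0.2); // 0.30000000000000004
    }
}
```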

Therefore, performing these operations in a different order is not guaranteed to produce the same answer: under the IEEE 754 standard for floating-point arithmetic, addition is not associative, because each intermediate result may be rounded. The math just happens to come out right most of the time, either because the application uses exactly representable fractions or because the developer doesn’t care about the loss, as with calculations for onscreen graphics. Here’s an example:

final float a = 3.3333333f;
final float b = 0.6373606f;
final float c = 0.36263946f;

System.out.println(a + b + c);//4.3333335
System.out.println(c + b + a);//4.333333
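The grouping can be made explicit in a self-contained sketch (Java evaluates + left to right, so a + b + c means (a + b) + c):

```java
public class FloatAssociativity {
    public static void main(String[] args) {
        final float a = 3.3333333f;
        final float b = 0.6373606f;
        final float c = 0.36263946f;

        // Java evaluates left to right, so a + b + c is (a + b) + c.
        float leftToRight = (a + b) + c;
        // Reordering the operands changes which rounding happens first.
        float reordered = (c + b) + a;

        System.out.println(leftToRight == reordered); // false for these values
    }
}
```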

Naturally you can still get these bugs with operator overloading if you write operations in the wrong order or add parentheses inappropriately, but the basic bug of having a closing parenthesis too soon or too late magically disappears.

So, for correctness, the bug-free version of the code is this:

final Vector result =;

These types of bugs are very common when you have to do any kind of math without operator overloading. Why? Functions aren’t really built to be called in this way. Our brains, brought up on school algebra, don’t work that way either.

To quote James Gosling in his work, “The Evolution of Numerical Computing in Java,” “Everyone I’ve talked to who does numerical work insists that operator overloading is totally essential. The method invocation style for writing expressions is so cumbersome as to be essentially useless.”

Another interesting fact about operators is that an operator is expressive of the side-effects it might cause. In other words, a plus operator by definition adds two operands, creating a result without changing either of the operands (unfortunately, this isn’t foolproof because most languages allow the programmer to do anything within an overloaded operator). An obvious example of this is the plus operator that is overloaded for String. A plus method, however, makes no such assumption, since a method is understood to be allowed to do whatever it wants.
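Java itself already ships one overloaded operator that honors this contract: + on String produces a new object and leaves both operands untouched.

```java
public class StringPlus {
    public static void main(String[] args) {
        String left = "foo";
        String right = "bar";

        // The overloaded + creates a new String; neither operand changes,
        // because String is immutable.
        String joined = left + right;

        System.out.println(joined); // foobar
        System.out.println(left);   // foo
    }
}
```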

This might be marginally interesting for a programmer, but certainly a compiler could perform some magical optimizations knowing that the operator basically performs a read-only operation.

So is Java ready for operator overloading?

Raising this question is really like shooting oneself in the foot, since it’s nearly impossible to answer. So instead, let’s talk about needs and benefits. Interestingly enough, the need has inadvertently risen dramatically with the rise of Java applications in the data analytics and financial trading realms, many of which rely on advanced mathematical formulations.

Yet another interesting development is in gaming. The advent of Minecraft (one of the best-selling games in history), which was fully written in Java, signaled that Java can rub elbows with the best of them in that space. And of course, 3D game engines are nothing but fancy algebra and geometry.

Other JVM languages such as Clojure, Groovy, Kotlin, and Scala all chose to implement operator overloading in one way or another. I believe this is also a clear signal that the Java community may be open for such a development.

Project Valhalla will provide Java developers with value types, which are user-defined objects that act and perform like primitive types. This means that you could define your own int64 type, for instance, or unsigned_int32. However, having to do math with those shiny new value types through function calls would be disheartening. That sounds like a perfect use case for operator overloading.

As a side note, an int64 would fit in a CPU register for certain processor architectures, which also means that further JVM optimization could be done to perform arithmetic using 64-bit instructions. This would be easier if Java had operator overloading or some other mechanism to denote that a specific function is a mathematical operation of the addition type, for example, for 64-bit wide integers.


Every language feature or library out there is potentially a double-edged sword. Some features are clear regarding the damage they might cause, making it easier to sidestep the damage, while others are very subtle and foreign, making damage harder to avoid.

The reason best practices are introduced is to minimize damage and educate developers about the as-yet-unknown. Operator overloading is no different in that regard. It is a very helpful tool that, when used properly, will save a lot of time when you are programming math problems and debugging math errors. That means less time wasted and more time developers can use to solve the actual problems they are trying to solve.

Mahmoud Abdelghany

Mahmoud Abdelghany is an engineer with Blue4IT, a Dutch consultancy firm that specializes in Java technologies. He spends a lot of time tinkering with games, which has led him to spend the better part of a year researching operator overloading. Follow Mahmoud on Twitter.
