Serialization in JVM, Android and Kotlin Multiplatform: Complete Analysis of Serializable, Externalizable, Parcelable and kotlinx.serialization
An architectural study of different approaches to transforming objects into bytes. We analyze four fundamental approaches based on benchmarks: reflective automation of Serializable, manual control of Externalizable, platform optimization of Parcelable, and universal code generation of kotlinx.serialization. We explore not only how these mechanisms work, but why they were created this way, what trade-offs are embedded in their architecture, and how they perform in different environments, from server JVM to mobile Android to cross-platform Kotlin.
Introduction
In this article, we will examine technologies for serialization and deserialization of data in the context of the Android, Java, Kotlin JVM, Kotlin Native, and KMP ecosystems.
Serialization and deserialization are fundamental operations in modern development. These processes are used everywhere: from storing data in device local storage to transmitting information over the network. Many NoSQL databases use their own serialization formats to optimize performance. The most common data exchange formats (JSON, XML, Protocol Buffers, MessagePack, and others) are closely related to serialization processes.
There are two main approaches to implementing serialization. The first is based on reflection, which allows analyzing object structure at runtime but comes with performance overhead. The second approach uses code generation at compile-time, which significantly speeds up data processing by generating specialized code. For example, Protocol Buffers requires prior data contract description in .proto files, which allows generating optimized code and eliminating redundant operations during deserialization.
When evaluating serialization solutions, processing speed is usually the priority, followed by the size of serialized data, and only then memory consumption. However, for mobile devices and embedded systems, the balance of these factors may shift toward minimizing memory and data size.
Within this article, we will analyze existing libraries and approaches to object serialization in JVM and Native ecosystems. We will examine in detail four main approaches: classic Java Serializable, its extension Externalizable, Android-specific Parcelable, and modern kotlinx.serialization. For each, we will conduct comparative performance testing and determine which solutions are most effective in various usage scenarios. In addition to obtaining concrete metrics, we will also explore the technical reasons that determine the performance of each approach.
Developers often claim that Parcelable is better than Serializable, but they can't always articulate what exactly this advantage consists of: execution speed, memory usage, or data size. Recently, migration to kotlinx.serialization has been gaining popularity. Why? And why have many never even heard of Externalizable?
New technologies don’t appear out of nowhere. Each is born as a response to the limitations of its predecessors. Without understanding the evolution of serialization, from classic solutions to modern alternatives, it’s difficult to objectively evaluate their strengths and weaknesses.
In interviews, superficial statements are common: “X is faster than Y” or “Z is better than W”. But when the question deepens: “Why faster? How does it work under the hood?”, there’s often a pause. Knowledge turns out to be borrowed from article headlines or others’ opinions.
Even if documentation states that one approach surpasses another, it’s important to independently verify and analyze performance in the context of your tasks. In development, as in engineering, everything is relative.
Before diving into the analysis, let's establish uniform terminology by defining the basic concepts and terms.
Fundamentals
Serialization is the process of converting structured data (objects, data structures) into a sequence of bytes or text representation suitable for storage or transmission. Serialization “packages” an object’s state into a format that can be saved to a file, transmitted over a network, or placed in a database.
Deserialization is the reverse process of restoring an object from its serialized representation. Deserialization “unpacks” a byte sequence or text back into a structured object while preserving its type and data.
Reflection is a runtime mechanism that allows a program to analyze and modify its own structure and behavior. In the context of serialization, reflection is used to dynamically traverse object fields without prior code generation. Naturally, in the informal world of development, reflection is considered dark magic.
Code Generation is the automatic creation of source code during compilation based on annotations, data schemas, or other metadata. Code generation eliminates reflection overhead by creating specialized serializer classes. Code generation itself is achieved through compiler plugins or code analyzers. In the JVM world, this was APT for a long time, later KAPT, then Kotlin’s evolution brought KSP. Starting with Kotlin 2.0, there’s now also the ability to implement compiler plugins, which was previously closed to third-party developers.
Data Contract/Schema is a formal description of data structure, defining field types, their names, and validation rules. Used in Protocol Buffers (.proto files), Apache Avro, and other schema-oriented formats.
Now that we’ve defined basic terminology, let’s move on to examining specific serialization methods. We’ll consider them in order of historical evolution from the perspective of Kotlin and Android. Interestingly, in the Java world, evolution as such didn’t occur: the first solution that appeared in early JDK versions continues to be used to this day.
First, we'll go through each approach and examine it under a magnifying glass, but we won't draw performance conclusions until we've considered all the methods. At the end, we'll compare the approaches to understand which is optimal in which scenarios.
The Serializable Interface
Let’s begin our deep dive with the Serializable interface, which is the earliest and perhaps most widespread serialization method in the JVM world. Serializable is a marker interface, meaning an interface containing no logic. Let’s look at its source code:
package java.io;
public interface Serializable {
}

As you can see, the interface is completely empty. A logical question arises: how can an empty interface do anything? Actually, Serializable works as a marker for the JVM, signaling that the class allows its serialization. All the magic happens at runtime through reflection, which we'll discuss in detail later.
Let’s create a simple class and see how serialization works in practice. For an example, let’s take a Person class with basic information about a person:
data class Person(
    val name: String,
    val dateOfBirth: Int,
    val address: String
) : Serializable

Notice that to turn a regular class into a serializable one, we just need to add : Serializable after the class declaration. No additional methods or fields need to be implemented.
Now let’s try to serialize an object of this class and save it to a file. The process will look as follows:
fun main() {
    val person = Person("John Wick", 1964, "New York")
    val file = File("serialization.bin").apply(File::createNewFile)
    val fileOutputStream = FileOutputStream(file)
    ObjectOutputStream(fileOutputStream).use { stream ->
        stream.writeObject(person)
        stream.flush()
    }
}

What's happening here? We create a Person object with John Wick's data, then create a serialization.bin file, open a stream (FileOutputStream) for writing to this file, and finally wrap it in an ObjectOutputStream. The writeObject(person) call triggers the entire serialization mechanism, turning our object into a byte sequence.
After running the code, we get a serialization.bin file with our serialized data. Out of curiosity, let’s try to open this file in a text editor. Here’s what we’ll see:
�� sr application.Person2�v��9� I dateOfBirthL addresst Ljava/lang/String;L nameq ~ xp �t New Yorkt John Wick

Looks scary, doesn't it? The file is clearly not intended for human reading due to the binary format, and the text editor struggles to interpret the bytes as UTF-8 characters. Nevertheless, if you look closely, among the "gibberish" you can discern quite readable fragments: field names (name, address), their types (Ljava/lang/String), the class name (Person), and even field values (New York, John Wick).
You might wonder: if serialization is turning an object into bytes, why do we see not zeros and ones, but relatively readable text? The answer is simple: the text editor tries to interpret bytes as UTF-8 characters. Some byte sequences accidentally coincide with printable character codes, so we see these text fragments among incomprehensible symbols.
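We can also look at the raw bytes directly, without a text editor. The sketch below (class and payload are our own, serializing into memory instead of a file) prints the fixed header every Java serialization stream begins with: the magic value 0xACED followed by the protocol version 0x0005.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class StreamHeaderDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject("John Wick");
        }
        byte[] bytes = bos.toByteArray();
        // The first four bytes are always STREAM_MAGIC (0xACED)
        // followed by STREAM_VERSION (0x0005)
        for (int i = 0; i < 4; i++) {
            System.out.printf("%02X ", bytes[i] & 0xFF);
        }
        System.out.println(); // prints: AC ED 00 05
    }
}
```

Everything after this header is the sequence of typed records (class descriptors, field data, strings) that the editor tried to show us as text.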
But where did all this information about our class structure come from in the file? We never explicitly specified which fields to save and what to call them. Here’s where it gets interesting: Serializable works entirely based on Reflection API. The JVM automatically analyzes our class structure at runtime and extracts all necessary metadata.
All the reflection magic of Serializable manifests from the moment ObjectOutputStream is created, or more precisely, from the moment the writeObject function is called. It's in this method that all the main work happens: it takes our Person object, serializes it, and then writes it to the serialization.bin file.
It’s important to understand the architecture: ObjectOutputStream itself actually knows nothing about our serialization.bin file. Instead, ObjectOutputStream works with FileOutputStream, which acts as an intermediary between the file and the object. This is the classic “Decorator” design pattern. Actually, most types of Streams (descendants of InputStream and OutputStream) are decorators, adding additional functionality to the base stream.
The deserialization process mirrors serialization:
fun main() {
    val file = File("serialization.bin")
    val fileInputStream = FileInputStream(file)
    ObjectInputStream(fileInputStream).use { stream ->
        print(stream.readObject() as Person)
    }
}

We find the file and then pass it to FileInputStream to restore the Person object from the saved set of bytes. Let's make sure the Serializable interface is actually necessary. If we remove the inheritance from the Serializable interface from the Person class, we'll get this error:
Exception in thread "main" java.io.NotSerializableException: Person

This error confirms that the marker interface Serializable is a mandatory condition for object serialization.
Internal Structure of Serializable
Now let’s dive deeper and look at how serialization works under the hood. Let’s examine the source code of the writeObject method of the ObjectOutputStream class, as it’s the one that takes the Person class object and converts it into a set of bytes:
public class ObjectOutputStream extends OutputStream implements ObjectOutput, ObjectStreamConstants {
    public final void writeObject(Object obj) throws IOException {
        if (enableOverride) {
            writeObjectOverride(obj);
            return;
        }
        try {
            writeObject0(obj, false);
        } catch (IOException ex) {
            if (depth == 0) {
                writeFatalException(ex);
            }
            throw ex;
        }
    }
}

This method acts as a dispatcher: if a subclass of ObjectOutputStream was created through the protected constructor and overrode the protected writeObjectOverride method, the subclass's logic is used; otherwise the call is delegated to the standard writeObject0 method. In our case, we didn't inherit from ObjectOutputStream, so we end up in writeObject0:
public class ObjectOutputStream extends OutputStream implements ObjectOutput, ObjectStreamConstants {
    private void writeObject0(Object obj, boolean unshared)
        throws IOException
    {
        ...
        ObjectStreamClass desc = ObjectStreamClass.lookup(cl, true);
        ...
        if (obj instanceof String) {
            writeString((String) obj, unshared);
        } else if (cl.isArray()) {
            writeArray(obj, desc, unshared);
        } else if (obj instanceof Enum) {
            writeEnum((Enum<?>) obj, desc, unshared);
        } else if (obj instanceof Serializable) {
            writeOrdinaryObject(obj, desc, unshared);
        } else {
            if (extendedDebugInfo) {
                throw new NotSerializableException(
                    cl.getName() + "\n" + debugInfoStack.toString());
            } else {
                throw new NotSerializableException(cl.getName());
            }
        }
        ...
    }
}

The writeObject0 method contains extensive logic for examining the object before writing (for example, handling null references, objects already written to the stream, and object replacement via writeReplace). In the fragment above, only the part of the logic that directly handles standard object serialization is left.
Note the important line we specifically left in the code: creating an ObjectStreamClass object. This object plays a key role in the entire serialization process. Let's look more closely at the ObjectStreamClass.lookup(cl, true) call. This is where the real serialization work begins. This method creates a class descriptor (an ObjectStreamClass object) that captures all the type metadata needed to write it to the stream. Essentially, this is a reflection-like analysis of the class, fixed not at the execution level but at the protocol (contract) level.
The lookup method first checks an internal cache. If a descriptor for this class already existed, it’s returned again. If not, a new one is created. During creation, a complete class structure analysis is performed: serializable object fields are determined, serialVersionUID is calculated, the presence of special methods (writeObject, readObject, readResolve, writeReplace) is checked, a link to the parent descriptor is established, flags indicating the nature of the class (enum, proxy, externalizable, record) are set.
The true parameter in the second argument means that descriptors are created not only for the class itself, but for the entire chain of its serializable ancestors. This is important: serialization in Java always knows the object structure up to the first non-Serializable parent, and it’s lookup that forms this hierarchy.
The result of the call becomes a desc object, which will be passed to writeOrdinaryObject. All subsequent steps (writing signature, serialVersionUID, field set and their values) are performed strictly according to what’s described in this descriptor. If you change the descriptor, the byte representation will change.
Thus, ObjectStreamClass.lookup is the transition point from the code level to the serialization protocol level. Up to this point, the JVM worked with the object as an instance of a type; afterward, it works with a set of described structures and bytes, having full information about what is in front of it. The first call for a given class is expensive in both time and memory: this is the first cost of using reflection for serialization.
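The descriptor built by lookup can be inspected directly: ObjectStreamClass is a public API. A small sketch (the Person class is redeclared locally here for self-containment; the one-argument lookup overload is the public counterpart of the internal call we saw above):

```java
import java.io.ObjectStreamClass;
import java.io.ObjectStreamField;
import java.io.Serializable;

public class DescriptorDemo {
    static class Person implements Serializable {
        String name;
        int dateOfBirth;
        String address;
    }

    public static void main(String[] args) {
        // lookup returns the cached descriptor, or null if the class is not Serializable
        ObjectStreamClass desc = ObjectStreamClass.lookup(Person.class);
        System.out.println("serialVersionUID = " + desc.getSerialVersionUID());
        for (ObjectStreamField field : desc.getFields()) {
            // primitive fields come first, then object fields, each group sorted by name
            System.out.println(field.getName() + " : " + field.getType().getSimpleName());
        }
    }
}
```

Since we didn't declare serialVersionUID, the printed value is the automatically computed hash; recompiling after a structural change would yield a different number.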
Let’s return to the writeObject0 method and consider the sequence of checks it performs. After creating the class descriptor, a cascade of object type checks begins:
- First, it checks if the object is a string (String). Strings are handled specially, as they're serialized more efficiently.
- Next, it checks if the class is an array.
- Then comes a check for enum. Any enum in the JVM is Serializable by default, because all enum classes implicitly inherit from java.lang.Enum, and java.lang.Enum itself implements the Serializable interface.
- Finally, if the object doesn't fall under any of the special categories, it checks if it implements the Serializable interface.

As a result of all these checks, one of two things happens: either we fall under the Serializable category and continue serialization, or we get a NotSerializableException error.
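The enum rule is easy to verify: an enum travels through the stream without declaring Serializable anywhere, and since enum constants are deserialized by name, even instance identity survives the round trip. A quick sketch (the Color enum is our own example):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class EnumSerializationDemo {
    enum Color { RED, GREEN, BLUE }   // no "implements Serializable" needed

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(Color.GREEN);
        }
        Color restored;
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            restored = (Color) ois.readObject();
        }
        // enum constants are resolved by name, so the very same instance comes back
        System.out.println(restored == Color.GREEN); // prints: true
    }
}
```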
For classes implementing Serializable, the writeOrdinaryObject method is called. After writeObject0 determines that the object actually implements the Serializable interface, control is passed to this method. It's responsible for writing an "ordinary" object to the stream, that is, an object that is not a string, array, or enumeration; inside, it distinguishes records, Externalizable classes (which we'll cover later in the article), and regular Serializable classes. This is where the real work of the serialization mechanism begins. The method looks as follows; we'll break it down next:
public class ObjectOutputStream extends OutputStream implements ObjectOutput, ObjectStreamConstants {
    private void writeOrdinaryObject(Object obj,
                                     ObjectStreamClass desc,
                                     boolean unshared)
        throws IOException
    {
        ...
        desc.checkSerialize();
        bout.writeByte(TC_OBJECT);
        writeClassDesc(desc, false);
        handles.assign(unshared ? null : obj);
        if (desc.isRecord()) {
            writeRecordData(obj, desc);
        } else if (desc.isExternalizable() && !desc.isProxy()) {
            writeExternalData((Externalizable) obj);
        } else {
            writeSerialData(obj, desc);
        }
    }
    ...
}

The first thing writeOrdinaryObject does is call desc.checkSerialize(). This call is not just a formality, but a guarantee that the class described by ObjectStreamClass desc satisfies all requirements of the serialization contract: the Serializable flag, the correctness of the special method signatures writeObject, readObject, readResolve, writeReplace, and the consistency of the serialVersionUID were verified while building the descriptor, and any recorded violation surfaces here. If the class violates the contract, the stream is interrupted with a NotSerializableException. Thus, serialization will never start for an object that cannot pass these structural checks.
After successful validation, the first control byte TC_OBJECT is written to the stream. This byte represents a serialization marker used by ObjectInputStream when reading the stream to recognize that what follows is specifically an object structure, not a string, array, reference, or other element type. The Object Serialization mechanism in Java uses a fixed binary protocol, where each element (object, class, field, array, etc.) is preceded by its marker. For an object, this is 0x73, i.e., the TC_OBJECT byte. Thus, object writing always starts with this marker.
Next, writeClassDesc(desc, false) is called. At this point, serialization moves from a concrete instance to its class description. The writeClassDesc method is responsible for writing the class descriptor, i.e., a structure containing the class name, serialVersionUID, number and types of serializable fields, as well as references to superclasses. If this descriptor was already encountered earlier in the stream, then instead of a full description, a TC_REFERENCE reference is written, pointing to the already existing descriptor. This saves space and maintains stream structure consistency. If the class is being serialized for the first time, then the name, version, field list, and other metadata are sequentially written to the stream. It’s thanks to writeClassDesc that deserialization on the other side is able to understand how to restore the object: which class to use, which fields to read, and in what order.
After writing the descriptor, handles.assign(unshared ? null : obj) is executed. This is a key moment related to the handle table. Object Serialization in Java guarantees preservation of reference integrity: if the same object occurs multiple times in the graph, the serializer won’t write it again, but will write a reference (TC_REFERENCE) to the already written instance. For this, all objects that have passed through the stream are registered in a handle table, where each object is assigned a unique identifier. The assign call performs exactly this assignment. If the object is marked as unshared, it’s not registered, and it cannot be referenced again in the future. Such a flag is used rarely, but has significance when there’s a need to completely break referential connectivity between parts of the serializable graph.
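This reference mechanism is easy to observe from outside: writing the same instance a second time adds only a tiny TC_REFERENCE record. A sketch (the payload is our own; the flush calls are needed because ObjectOutputStream buffers internally before reaching the underlying stream):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class HandleTableDemo {
    public static void main(String[] args) throws IOException {
        String shared = "some reasonably long shared payload";
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        int afterFirst;
        int secondWriteSize;
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(shared);
            oos.flush();
            afterFirst = bos.size();
            oos.writeObject(shared);   // already registered in the handle table
            oos.flush();
            secondWriteSize = bos.size() - afterFirst;
        }
        // the second write is just TC_REFERENCE (1 byte) plus a 4-byte handle
        System.out.println("first: " + afterFirst + " bytes, second: " + secondWriteSize + " bytes");
    }
}
```

The same effect protects cyclic object graphs: without the handle table, a self-referencing structure would send serialization into infinite recursion.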
At the next stage, writeOrdinaryObject determines the specific nature of the class. Here begins a fork into three categories:
- If the class is a record, writeRecordData is called.
- If the class implements Externalizable (and is not a proxy), writeExternalData is called.
- In all other cases, the classic writeSerialData branch is used.
Let’s consider them sequentially.
1. Record. If desc.isRecord() returns true, then the current class is a Java Record. Serialization of record-classes follows separate logic, introduced starting with Java 16. Unlike regular classes, a record has no mutable state, all its fields are components defined in the constructor signature. The writeRecordData method goes through all record components in their declaration order and writes their values directly, without calling writeObject or writeExternal. This ensures a stable, deterministic serialization format, independent of user overrides.
If you’re from that same (old) school where the last familiar Java version is 8 or 11, then record-classes might have passed you by. They appeared only with Java 16 and became for Java what data class has long been for Kotlin, that is, a concise way to declare an immutable data structure where constructor, equals, hashCode, and toString are generated by the compiler. The difference is only that Java did this without excessive syntactic romanticism.
2. Externalizable. We’ll also cover Externalizable in detail in this article, but for now it’s worth considering for context. If a class implements the Externalizable interface, control is passed to writeExternalData. In this case, serialization is completely delegated to the object itself. The writeExternalData method calls obj.writeExternal(ObjectOutput). Here the class itself decides what data and in what order to write to the stream. Unlike Serializable, where the platform manages serialization automatically, Externalizable provides complete freedom, but also complete responsibility to the developer. It’s important to note that writeOrdinaryObject calls this path only if the class is actually Externalizable and is not a proxy, since dynamic proxies are handled differently.
3. Serializable (regular case). If the object is neither a record nor Externalizable, classic Serializable serialization remains. In this case, writeSerialData(obj, desc) is called. This method is the core of standard Java serialization. It’s responsible for sequential writing of all serializable object fields, including those inherited from superclasses, as well as for calling user methods writeObject, if they are defined in the class. Inside writeSerialData, superclass data is written first, then current class fields. If the class defines a private void writeObject(ObjectOutputStream oos) method, it’s called with the current stream passed, which allows overriding the standard writing format. If no such method exists, defaultWriteFields is called, which simply writes all fields according to the description from ObjectStreamClass desc.
Thus, writeOrdinaryObject doesn’t directly contain the logic for writing data itself. It only determines the route, i.e., which strategy to apply for a specific class type. This is a routing point, a kind of serialization dispatcher, ensuring protocol uniformity while maintaining flexibility for different types.
After executing one of the branches (record, externalizable, or serializable), the object is fully written to the stream, and the handle table is fixed to maintain reference integrity. We’re particularly interested in the third branch, i.e., regular serialization through Serializable. Let’s look at the writeSerialData method in more detail:
public class ObjectOutputStream extends OutputStream implements ObjectOutput, ObjectStreamConstants {
    private void writeSerialData(Object obj, ObjectStreamClass desc)
        throws IOException
    {
        ObjectStreamClass.ClassDataSlot[] slots = desc.getClassDataLayout();
        for (int i = 0; i < slots.length; i++) {
            ObjectStreamClass slotDesc = slots[i].desc;
            if (slotDesc.hasWriteObjectMethod()) {
                PutFieldImpl oldPut = curPut;
                curPut = null;
                SerialCallbackContext oldContext = curContext;
                try {
                    curContext = new SerialCallbackContext(obj, slotDesc);
                    bout.setBlockDataMode(true);
                    slotDesc.invokeWriteObject(obj, this);
                    bout.setBlockDataMode(false);
                    bout.writeByte(TC_ENDBLOCKDATA);
                } finally {
                    curContext.setUsed();
                    curContext = oldContext;
                }
                curPut = oldPut;
            } else {
                defaultWriteFields(obj, slotDesc);
            }
        }
    }
}

As we remember from the previous section, this method is called for objects that are neither record nor externalizable, but represent regular classes implementing Serializable. The writeSerialData method is where serialization ends up in most cases. If writeOrdinaryObject can be called a router, then writeSerialData is the executor: this is where the actual writing of object state occurs.
At the beginning of the method, a slots array is formed, obtained through desc.getClassDataLayout(). This is an internal representation of the hierarchy of classes participating in serialization. Each array element represents a ClassDataSlot containing a reference to the ObjectStreamClass of a specific inheritance level. Thus, slots sets a strict order of traversing the class chain from top to bottom, from the earliest ancestor that declared serializable fields to the final descendant.
Next, a loop through these slots is executed. For each class represented in slotDesc, the serializer checks if a custom writeObject(ObjectOutputStream) method is defined in it. The check is performed by calling slotDesc.hasWriteObjectMethod(). This is the same capability that allows a class to intervene in the serialization process and partially control what data and in what form will go to the stream.
If a custom writeObject is found, a serialization callback context is created, representing a SerialCallbackContext object. It’s necessary for correct management of nested calls, particularly to ensure symmetrical work with readObject during deserialization. After this, block write mode is enabled (bout.setBlockDataMode(true)), which groups data written during custom writeObject into a single block. This guarantees that the entire custom data segment will be interpreted when reading as one logical whole.
Next, the method itself is called: slotDesc.invokeWriteObject(obj, this). This is the point where actual control is passed to user code. If the class overrode writeObject, its logic is executed here, with the ability to directly call defaultWriteObject() or manually write individual fields. After the block completes, writing returns to normal mode (bout.setBlockDataMode(false)), and then the TC_ENDBLOCKDATA byte is added to the stream, marking the end of custom data.
All temporary structures (curPut, curContext) are restored so the serializer state remains consistent. If extendedDebugInfo is enabled, the debug information stack is cleared. In the absence of a custom method, the standard path is executed: defaultWriteFields(obj, slotDesc). This method sequentially goes through all fields defined in ObjectStreamClass and writes their values using appropriate serialization mechanisms (for primitives this is direct binary value, for reference types this is recursive writeObject0 call).
This is where the real work of object serialization finishes. After the loop through all slots completes, the stream contains the complete binary structure of the instance, from base classes to final fields, taking into account all custom overrides. Thus, writeSerialData is the point where the logical class model turns into a byte stream. Everything prepared before this (descriptors, handle tables, metadata) serves only as infrastructure ensuring these bytes can be restored back to an identical object. After writeSerialData completes, the object is considered fully serialized, and the stream is ready to move to the next element.
Control Over the Serialization Process
Now let’s talk about several important mechanisms that allow controlling the serialization process. Imagine a situation: you serialized an object and saved it to a file. A month later you changed the class, added a new field or removed an old one. What will happen if you try to deserialize the old file? The JVM might simply refuse to do this, throwing InvalidClassException. To solve this problem, there’s a special serialVersionUID field. This is a unique class version identifier that’s written to the stream during serialization. During deserialization, the JVM compares serialVersionUID from the stream with the serialVersionUID of the current class. If they match, deserialization continues; if not, an exception is thrown. If you don’t specify serialVersionUID explicitly, the JVM will calculate it automatically based on class structure (field names, methods, access modifiers). The problem is that any slightest change in the class will change this hash, and old serialized objects will become incompatible.
data class Person(
    val name: String,
    val dateOfBirth: Int,
    val address: String
) : Serializable {
    companion object {
        private const val serialVersionUID: Long = 1L
    }
}

Sometimes a class has fields that shouldn't or can't be serialized. For example, these might be temporary calculated values, caches, or sensitive data like passwords. In Java, the transient keyword exists for this, while Kotlin uses the @Transient annotation. During serialization, all marked fields will be ignored, and during deserialization they'll receive default values (null for objects, 0 for numbers, false for boolean).
data class User(
    val username: String,
    @Transient val password: String = "",
    @Transient val cache: Cache? = null
) : Serializable

What if standard serialization isn't enough? Maybe you want to encrypt data before writing or perform some preprocessing? For this, you can override the special writeObject and readObject methods:
class SecureUser(
    val username: String,
    private var password: String
) : Serializable {

    @Throws(IOException::class)
    private fun writeObject(out: ObjectOutputStream) {
        out.defaultWriteObject()
        val encrypted = encrypt(password)
        out.writeObject(encrypted)
    }

    @Throws(IOException::class, ClassNotFoundException::class)
    private fun readObject(input: ObjectInputStream) {
        input.defaultReadObject()
        val encrypted = input.readObject() as String
        password = decrypt(encrypted)
    }

    companion object {
        private const val serialVersionUID = 1L
    }
}

Notice that these methods must be private. This might seem strange, since usually private methods aren't called from outside, but the JVM uses reflection to call them.
Another interesting case is related to singletons. During deserialization, the JVM will create a new object instance, breaking the Singleton pattern. You’ll have two “singletons”! To avoid this, the readResolve method is used:
object DatabaseConnection : Serializable {
    private const val serialVersionUID = 1L

    var host: String = "localhost"
    var port: Int = 5432

    private fun readResolve(): Any = DatabaseConnection
}

The method is called immediately after object deserialization and can return either the same object or a completely different one. In the case of a singleton, we simply return the existing instance, ignoring the deserialized one. Similarly, writeReplace() works: it's called before serialization and is useful when you want to serialize an object in a more compact form.
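Together, writeReplace and readResolve form what's often called the serial proxy pattern. The sketch below (the Point and PackedPoint classes are invented for the demonstration) substitutes a compact proxy on write and restores the original type on read:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerialProxyDemo {
    static class Point implements Serializable {
        final int x;
        final int y;
        Point(int x, int y) { this.x = x; this.y = y; }

        // Called before serialization: substitute a compact proxy for this object
        private Object writeReplace() {
            return new PackedPoint(((long) x << 32) | (y & 0xFFFFFFFFL));
        }
    }

    static class PackedPoint implements Serializable {
        final long packed;
        PackedPoint(long packed) { this.packed = packed; }

        // Called after deserialization: unpack and restore the original type
        private Object readResolve() {
            return new Point((int) (packed >> 32), (int) packed);
        }
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new Point(7, 42));   // a PackedPoint goes into the stream
        }
        Point restored;
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            restored = (Point) ois.readObject(); // readResolve hands back a Point
        }
        System.out.println(restored.x + "," + restored.y); // prints: 7,42
    }
}
```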
Despite its apparent simplicity, Serializable hides many problems:

- Using reflection makes serialization slow: each time, the JVM analyzes class structure through the Reflection API.
- Java's binary format contains a lot of metadata (class names, packages, field types), which increases the size of serialized data.
- Changing class structure easily breaks compatibility: even adding a new method can change the automatically calculated serialVersionUID.
- Deserializing untrusted data can lead to vulnerabilities: an attacker can craft a byte stream that executes malicious code when deserialized.
- During deserialization, the JVM creates the object bypassing the constructor, which means any validation checks in the constructor are ignored.
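The constructor-bypass problem mentioned above is easy to demonstrate: a counter incremented in the constructor stays at one even though two instances end up existing (the Audited class is invented for the demonstration):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class ConstructorBypassDemo {
    static class Audited implements Serializable {
        static int constructorCalls = 0;
        final int value;

        Audited(int value) {
            constructorCalls++;      // imagine validation or logging here
            this.value = value;
        }
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new Audited(10));
        }
        Audited copy;
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            copy = (Audited) ois.readObject();
        }
        // The deserialized copy carries the field value, but Audited's constructor
        // ran only for the original instance
        System.out.println(copy.value + " / " + Audited.constructorCalls); // prints: 10 / 1
    }
}
```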
At the same time, all those capabilities we just examined (serialVersionUID, transient, writeObject, readResolve) don’t actually solve the main problem. Our intervention is minimal. These mechanisms are more like hooks or standard configurations that allow Serializable to work correctly in specific cases. The serialVersionUID field is needed for versioning, readResolve preserves singletons, @Transient excludes unnecessary fields. But none of these methods give us real control over the serialization and deserialization process. We still can’t influence performance, can’t optimize data size, can’t change the write format. The JVM continues using reflection, continues writing all metadata, continues working slowly. We’re just passengers in this process, allowed to adjust a couple of parameters.
What if we want more? What if we need real control over exactly how our objects are serialized? For such cases, Serializable has a brother on steroids.
The Externalizable Interface
If you remember, at the beginning of the article we said that Serializable is a marker interface without a single method. The JVM sees this marker and automatically starts the reflection mechanism. Externalizable works completely differently. This isn’t a marker, it’s a contract with two explicit methods:
public interface Externalizable extends java.io.Serializable {
void writeExternal(ObjectOutput out) throws IOException;
void readExternal(ObjectInput in) throws IOException, ClassNotFoundException;
}
Notice that Externalizable inherits from Serializable. This is an important detail, meaning that Externalizable objects still participate in the general Java serialization mechanism, but with a fundamentally different approach. The JVM no longer uses reflection to traverse fields. Instead, it simply calls your writeExternal and readExternal methods, completely shifting responsibility to you. Sounds a lot like Android’s Parcelable, doesn’t it?
Let’s rewrite our Person class using Externalizable:
class Person(
var name: String = "",
var dateOfBirth: Int = 0,
var address: String = ""
) : Externalizable {
override fun writeExternal(out: ObjectOutput) {
out.writeUTF(name)
out.writeInt(dateOfBirth)
out.writeUTF(address)
}
override fun readExternal(input: ObjectInput) {
name = input.readUTF()
dateOfBirth = input.readInt()
address = input.readUTF()
}
}
The first feature is immediately visible: the class must have a no-argument constructor. This is a critical requirement. During deserialization, the JVM first creates a class instance through this constructor, then calls readExternal to fill the fields. If the constructor is absent, you’ll get InvalidClassException. In Kotlin, this is solved through parameters with default values, as shown above.
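Before moving on, the contract can be checked with a quick in-memory round trip. Here is a Java sketch of the same idea (class and field names are illustrative); note the mandatory public no-argument constructor:

```java
import java.io.*;

class ExtPerson implements Externalizable {
    String name = "";
    int dateOfBirth;

    // Mandatory no-arg constructor: the JVM instantiates the object
    // through it before calling readExternal.
    public ExtPerson() {}
    ExtPerson(String name, int dob) { this.name = name; this.dateOfBirth = dob; }

    @Override public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(name);
        out.writeInt(dateOfBirth);
    }
    @Override public void readExternal(ObjectInput in) throws IOException {
        name = in.readUTF();          // same order as in writeExternal
        dateOfBirth = in.readInt();
    }
}

public class ExternalizableRoundTrip {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(new ExtPerson("John Wick", 1964));
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()))) {
            ExtPerson p = (ExtPerson) in.readObject();
            System.out.println(p.name + " " + p.dateOfBirth); // John Wick 1964
        }
    }
}
```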
Next, let’s serialize again. This time our class implements Externalizable, so we’ll name the file “externalization.bin”:
fun main(args: Array<String>) {
val person = Person("John Wick", 1964, "New York")
val file = File("externalization.bin").apply(File::createNewFile)
val fileOutputStream = FileOutputStream(file)
ObjectOutputStream(fileOutputStream).use { stream ->
stream.writeObject(person)
stream.flush()
}
}
Let’s open the file as text: there’s much less information than in the Serializable case. First comes the full class name. The absence of field names and explicit type information bound to classes immediately stands out. Behind the unreadable text are service markers of the serialization protocol and byte sequences corresponding to the data written in writeExternal. These markers, such as STREAM_MAGIC, STREAM_VERSION, TC_OBJECT, TC_CLASSDESC, TC_STRING, TC_ENDBLOCKDATA, TC_NULL, TC_REFERENCE, TC_BLOCKDATA and others, act as structural separators, allowing the JVM during deserialization to understand where each element begins and ends, and to determine its type and context.
�� sr application.Person���O�! xpw John Wick � New Yorkx
Now let’s see what happens inside. Remember the writeOrdinaryObject method from the Serializable breakdown? That same cascade of object type checks? There was a check for Externalizable, and if the class implements this interface, control is passed to the writeExternalData method.
Let’s look at its implementation:
public class ObjectOutputStream extends OutputStream implements ObjectOutput, ObjectStreamConstants {
private void writeExternalData(Externalizable obj) throws IOException {
PutFieldImpl oldPut = curPut;
curPut = null;
SerialCallbackContext oldContext = curContext;
try {
curContext = null;
if (protocol == PROTOCOL_VERSION_1) {
obj.writeExternal(this);
} else {
bout.setBlockDataMode(true);
obj.writeExternal(this);
bout.setBlockDataMode(false);
bout.writeByte(TC_ENDBLOCKDATA);
}
} finally {
curContext = oldContext;
}
curPut = oldPut;
}
}
The code looks significantly simpler than all that complex machinery of field traversal through reflection that we saw in writeSerialData. Yes, the class descriptor is still created through ObjectStreamClass.lookup in writeOrdinaryObject before writeExternalData is called; this is necessary to record information about the class itself (its name, its hierarchy). But there is no recursive traversal of the class hierarchy to write fields at each level, and no call to defaultWriteFields, which reads all field values through reflection. The JVM simply calls obj.writeExternal(this), passing control to your code. All responsibility for what data to write, and how, lies with you.
Notice the work with block data mode (setBlockDataMode). This is a technical point that ensures the correct serialization stream structure. In PROTOCOL_VERSION_2 (which is used by default since Java 1.2), data is written in blocks, and each block ends with the TC_ENDBLOCKDATA marker. This allows the JVM to correctly determine object data boundaries in the stream.
The deserialization process mirrors this. Instead of the complex mechanism of restoring fields through reflection, the JVM creates an object through the no-argument constructor and calls readExternal:
public class ObjectInputStream extends InputStream implements ObjectInput, ObjectStreamConstants {
private void readExternalData(Externalizable obj, ObjectStreamClass desc)
throws IOException {
SerialCallbackContext oldContext = curContext;
if (oldContext != null)
oldContext.check();
curContext = null;
try {
boolean blocked = desc.hasBlockExternalData();
if (blocked) {
bin.setBlockDataMode(true);
}
if (obj != null) {
try {
obj.readExternal(this);
} catch (ClassNotFoundException ex) {
handles.markException(passHandle, ex);
}
}
if (blocked) {
skipCustomData();
}
} finally {
if (oldContext != null)
oldContext.check();
curContext = oldContext;
}
}
}
Again we see how much simpler this is compared to Serializable. No metadata restoration, no recursive reading of class hierarchy. Call obj.readExternal(this) and that’s it. You decide yourself in what order to read fields and how to interpret them.
Here it’s important to understand the key difference. When using Serializable, the JVM automatically writes class metadata to the stream (field names, types, package information). Remember that “dirty” serialization.bin file output, where we saw field and type names among bytes? All this is metadata that the JVM adds automatically. With Externalizable, this doesn’t happen. Only the data you explicitly wrote goes to the stream. This makes serialized objects significantly smaller in size.
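The size difference can be measured directly. The following Java sketch serializes the same three fields twice, once through default Serializable machinery and once through Externalizable (class names are illustrative); the Externalizable stream comes out smaller because field names and type signatures never enter it:

```java
import java.io.*;

// Same three fields, two serialization strategies.
class SerPerson implements Serializable {
    private static final long serialVersionUID = 1L;
    String name = "John Wick";
    int dateOfBirth = 1964;
    String address = "New York";
}

class ExtPerson implements Externalizable {
    String name = "John Wick";
    int dateOfBirth = 1964;
    String address = "New York";
    public ExtPerson() {}
    @Override public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(name);
        out.writeInt(dateOfBirth);
        out.writeUTF(address);
    }
    @Override public void readExternal(ObjectInput in) throws IOException {
        name = in.readUTF();
        dateOfBirth = in.readInt();
        address = in.readUTF();
    }
}

public class SizeComparison {
    static int size(Object o) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(o);
        }
        return buf.size();
    }
    public static void main(String[] args) throws IOException {
        int ser = size(new SerPerson());
        int ext = size(new ExtPerson());
        System.out.println("Serializable: " + ser + " bytes, Externalizable: " + ext + " bytes");
        System.out.println(ext < ser); // the Externalizable stream is smaller
    }
}
```

Exact byte counts depend on the JDK and class names, so treat the numbers as indicative rather than universal.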
But with great power comes great responsibility. You must guarantee that the write order in writeExternal exactly matches the read order in readExternal. If you write first String, then Int, then String, you’re obligated to read in the same order. Any mismatch will lead to incorrect deserialization or an exception. The JVM no longer watches over this for you.
class Person(
var name: String = "",
var dateOfBirth: Int = 0,
var address: String = ""
) : Externalizable {
override fun writeExternal(out: ObjectOutput) {
out.writeUTF(name)
out.writeInt(dateOfBirth)
out.writeUTF(address)
}
override fun readExternal(input: ObjectInput) {
name = input.readUTF() // Order matches!
dateOfBirth = input.readInt() // Order matches!
address = input.readUTF() // Order matches!
}
}
Another important point concerns versioning. With Serializable, we used serialVersionUID to control version compatibility. With Externalizable, you can implement your own versioning logic:
class Person(
var name: String = "",
var dateOfBirth: Int = 0,
var address: String = "",
var phoneNumber: String = ""
) : Externalizable {
companion object {
private const val VERSION = 2
}
override fun writeExternal(out: ObjectOutput) {
out.writeInt(VERSION)
// ... writing other fields
out.writeUTF(phoneNumber) // New field in version 2
}
override fun readExternal(input: ObjectInput) {
val version = input.readInt()
// ... reading other fields
if (version >= 2) {
phoneNumber = input.readUTF()
}
}
}
This approach gives flexibility in managing backward compatibility. You can add new fields, change data format, and all this will work as long as your logic in readExternal correctly handles different versions.
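To make both branches of that version check concrete, here is a runnable Java sketch of the same stamp-the-version pattern, shown on raw data streams so that a v1 payload (written before the phone field existed) and a v2 payload can be fed to the same version-aware reader (all names here are illustrative):

```java
import java.io.*;

// The writer stamps a version number first; the reader branches on it.
public class VersioningDemo {
    static byte[] writeV1(String name) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(1);              // version stamp comes first
        out.writeUTF(name);
        return buf.toByteArray();
    }
    static byte[] writeV2(String name, String phone) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(2);
        out.writeUTF(name);
        out.writeUTF(phone);          // field added in version 2
        return buf.toByteArray();
    }
    static String read(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        int version = in.readInt();
        String name = in.readUTF();
        // Old payloads simply lack the field; fall back to a default.
        String phone = version >= 2 ? in.readUTF() : "unknown";
        return name + "/" + phone;
    }
    public static void main(String[] args) throws IOException {
        System.out.println(read(writeV1("John Wick")));           // John Wick/unknown
        System.out.println(read(writeV2("John Wick", "555-01"))); // John Wick/555-01
    }
}
```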
Externalizable performance is theoretically higher than Serializable because there’s no reflection overhead. But this doesn’t mean an automatic win. If you write inefficient code in writeExternal or readExternal, performance can be worse. We’ll see real numbers in the benchmarks section.
When should you use Externalizable? When you need full control over data format, when serialized object size is critical, or when standard serialization works inefficiently for your data structure. But remember that with this control comes responsibility for implementation correctness. One mistake in read/write order, and your deserialization will break in ways that are difficult to diagnose.
From JVM to Android: Why Externalizable Didn’t Fit
We’ve covered both approaches to serialization in the JVM ecosystem. Serializable provides ease of use but pays for it with performance and data redundancy. Externalizable provides control but requires more code and attention. It would seem the perfect solution has been found, especially for mobile devices where both performance and data size matter.
But when Google was developing Android, engineers faced a fundamental problem. Android isn’t just Java on a mobile device. It’s an ecosystem with hard constraints: limited memory, battery, processors with less computational power (at the time of Android’s creation). But mainly, it’s a specific inter-process communication (IPC) architecture through the Binder mechanism.
Let’s understand what the problem is. Both serialization mechanisms we’ve examined were developed for JVM with certain assumptions. First: serialization is usually used for long-term storage or network transmission. Second: the overhead of creating streams (ObjectOutputStream, ObjectInputStream) is acceptable because data is then transmitted somewhere far (to disk, over network). Third: the format must be compatible between different Java versions and even different JVMs.
In Android, everything is different. When you start a new Activity, pass data to a Service, or send a broadcast, this isn’t a network operation or disk write. This is IPC between processes on one device through Binder. Objects need to be serialized and deserialized not for sending to another country, but for passing to a neighboring process. This happens constantly, hundreds of times per second. Every extra byte, every extra operation directly affects interface responsiveness.
Try using Serializable for passing Intent with data between Activities. It works, Android supports this. But behind the scenes, the following happens: ObjectOutputStream is created, reflection mechanism starts (even despite Dalvik/ART, it’s still slow), class metadata is written, many temporary objects are created, ObjectInputStream is created on the other side, reverse reflection starts, temporary objects are created again. And here’s where the real problem for Android begins. Each temporary object is work for the Garbage Collector. On old Android devices with limited memory and primitive GC, garbage collection pauses directly affect interface smoothness. The user sees slowdowns, lags, freezes. All this to pass an object to a process that’s right next door. It’s like ordering a truck with a whole logistics chain to move a box to your neighbor.
What about Externalizable? It’s faster than Serializable, yes, and gives more control. Technically, it can even be used in memory through ByteArrayOutputStream, without real files or sockets. With good implementation in regular JVM with optimizing JIT compiler, Externalizable can show excellent performance, sometimes even comparable to or exceeding Parcelable in pure object serialization/deserialization speed.
But in the context of Android IPC, the problems aren’t in Externalizable’s speed itself, but in the fact that it wasn’t designed for this task. First problem: data format is tied to Java serialization protocol with its service markers (TC_OBJECT, TC_ENDBLOCKDATA, etc.), which add extra bytes to each transmission. Second: writeExternal and readExternal calls go through the ObjectOutputStream/ObjectInputStream abstraction layer, even when working with ByteArrayOutputStream in memory. These streams aren’t integrated with Binder and require additional data copying. Third: this still creates more temporary objects and loads GC compared to direct writing to Parcel. Fourth: there’s no native integration with Android runtime (ART), while Parcel works directly with IPC mechanisms at the kernel level.
In other words, Externalizable is a fast mechanism for JVM, but not optimal for Android specifics, where each IPC operation must be maximally efficient, and Binder integration is critically important.
Binder works differently. It minimizes data copying between processes, using single copying through the Linux kernel. Data is written directly to a Parcel buffer, which is then passed through Binder driver with minimal overhead. This requires a serialization mechanism that “understands” this specificity and works directly with a binary buffer without intermediate abstraction layers.
That’s exactly why Parcelable was created. Conceptually it’s very similar to Externalizable: you implement two methods (writeToParcel and constructor from Parcel), you control yourself what and how you write, you’re responsible for read and write order. The idea of manual control over the serialization process clearly came from Externalizable. But the implementation is completely reworked for Android. Instead of streams, Parcel is used, which works with a flat, untyped binary buffer. Instead of Java serialization protocol, a minimalist format without type metadata is used. Instead of creating temporary objects, data is written directly to buffer, minimizing GC load.
An important feature: Parcel is an untyped buffer. It has no information about data types, no field names, no versioning. This means there’s no version compatibility (like Serializable with its serialVersionUID) here. You’re fully responsible for backward compatibility. If you change field order in writeToParcel and forget to update read order in the constructor, data will be read incorrectly, and you’ll get hard-to-catch bugs. In this regard, Serializable was more “forgiving”, automatically detecting version incompatibility.
Initially, Parcelable had to be written manually, which was tedious and error-prone. With Kotlin’s arrival, the situation changed. The kotlin-parcelize plugin (@Parcelize annotation) automatically generates all boilerplate code during compilation, guaranteeing correct field read and write order. This combined the control of Externalizable with the convenience of Serializable.
For those who immediately started recognizing Parcelable in Externalizable, yes, Externalizable became the philosophical foundation for Parcelable. Both say: “don’t trust automation, take control into your own hands”. But Parcelable goes further, discarding all JVM serialization baggage and creating a solution from scratch, optimized for Android specifics: minimal copying through kernel, absence of temporary objects, direct work with binary buffer without typing.
The Parcelable Interface
We just found out why Google couldn’t use existing JVM solutions for Android. Serializable is too slow due to reflection and creates excessive load on Garbage Collector through many temporary objects. Externalizable, though faster, isn’t integrated with Binder and is tied to Java serialization protocol with all its markers and metadata. A solution was needed, specifically designed for mobile platform: fast, compact, and working directly with Android’s Binder mechanism.
Parcelable became exactly such a solution. Let’s start with the interface and see what it requires from us:
public interface Parcelable {
int describeContents();
void writeToParcel(Parcel dest, int flags);
interface Creator<T> {
T createFromParcel(Parcel source);
T[] newArray(int size);
}
}
Unlike Serializable, which was just an empty marker, here we see real methods that need to be implemented. The writeToParcel() method is responsible for writing object data to a special Parcel container, and describeContents() informs the system about the presence of special resources (more on this later). Additionally, each class must provide a CREATOR - a special object that knows how to create instances from Parcel.
Let’s try to implement our familiar Person class using Parcelable. Here’s how it looked before automatic generation appeared:
data class Person(
val name: String,
val dateOfBirth: Int,
val address: String
) : Parcelable {
constructor(parcel: Parcel) : this(
parcel.readString()!!,
parcel.readInt(),
parcel.readString()!!
)
override fun writeToParcel(parcel: Parcel, flags: Int) {
parcel.writeString(name)
parcel.writeInt(dateOfBirth)
parcel.writeString(address)
}
override fun describeContents(): Int = 0
companion object CREATOR : Parcelable.Creator<Person> {
override fun createFromParcel(parcel: Parcel): Person {
return Person(parcel)
}
override fun newArray(size: Int): Array<Person?> {
return arrayOfNulls(size)
}
}
}
Look at this code carefully. For three simple fields, we had to write almost 40 lines of boilerplate code. We manually prescribe how to write each field in the writeToParcel() method, then read them in exactly the same order in the constructor from Parcel, and finally create a CREATOR with two methods. Moreover, write and read order must match absolutely precisely. If you accidentally write writeInt(dateOfBirth) before writeString(name), and do the opposite when reading, you’ll get a bug that will be very difficult to catch.
Android Studio tried to make developers’ lives easier by adding a ready template: it was enough to press Alt+Insert (or Cmd+N on Mac) and choose “Parcelable implementation” for the IDE to generate all necessary code. But this solved the problem only partially. As soon as you added a new field to the class or changed the order of existing ones, you had to manually update all serialization methods. Forgot to add a new field to writeToParcel? Get a silent bug in production.
The situation changed dramatically with Kotlin’s arrival. First, in the kotlinx-android-extensions plugin, the @Parcelize annotation appeared, which automatically generated the entire implementation during compilation. Now our Person class could be written like this:
@Parcelize
data class Person(
val name: String,
val dateOfBirth: Int,
val address: String
) : Parcelable
Three lines instead of forty! One annotation, and the compiler itself generates all necessary code, guaranteeing correct write and read order.
True, kotlinx-android-extensions turned out to be too broad a plugin. Besides @Parcelize, it included synthetic imports for view (the famous import kotlinx.android.synthetic.main.*), which several years later were recognized as an anti-pattern and deprecated in favor of ViewBinding. As a result, the plugin was split, and @Parcelize moved to its own compact kotlin-parcelize module. Now to use it, it’s enough to add to build.gradle:
plugins {
id("kotlin-parcelize")
}
And that’s it. No runtime dependencies, no additional libraries. All generation happens at the compiler level, creating optimal bytecode.
How This Works in Practice
Before diving into technical details, let’s look at real Parcelable usage in Android. If you noticed, unlike examples with Serializable and Externalizable where we created files and looked at their contents, we didn’t do this here. Why?
The reason is simple: Parcelable was created not for saving to files, but for passing data between Android components. Let’s look at a typical scenario: passing an object from one Activity to another.
// FirstActivity.kt
val person = Person("John Wick", 1964, "New York")
val intent = Intent(this, SecondActivity::class.java)
intent.putExtra("person_data", person) // person implements Parcelable
startActivity(intent)
// SecondActivity.kt
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
val person = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU) {
intent.getParcelableExtra("person_data", Person::class.java)
} else {
@Suppress("DEPRECATION")
intent.getParcelableExtra<Person>("person_data")
}
println(person) // Person(name=John Wick, dateOfBirth=1964, address=New York)
}
What happens behind the scenes? When you call intent.putExtra(), Android serializes the Person object to Parcel, passes this buffer through Binder to a new process (if Activity starts in another process) or simply to a new component, and there deserializes it back. The whole process takes microseconds. No files, no input/output streams, no long-term storage.
But what if we still want to see what serialized data looks like and what really happens under the hood? Let’s look at the full Parcel work cycle - from serialization to deserialization:
val person = Person("John Wick", 1964, "New York")
val source = Parcel.obtain()
source.writeParcelable(person, 0)
val bytes = source.marshall()
source.recycle()
println("Size: ${bytes.size} bytes")
println("Data (hex): ${bytes.joinToString(" ") { "%02x".format(it) }}")
val readable = bytes.filter { it in 32..126 }.map { it.toInt().toChar() }.joinToString("")
println("Readable characters: $readable")
val destination = Parcel.obtain()
destination.unmarshall(bytes, 0, bytes.size)
destination.setDataPosition(0)
val classLoader = Person::class.java.classLoader
val result = destination.readParcelable<Person>(classLoader)
destination.recycle()
println("Restored object: $result")
Output on Emulator Pixel 6 API 24 will show:
Data (hex): 1c 00 00 00 6b 00 7a 00 2e 00 61 00 70 00 70 00 6c 00 69 00 63 00 61 00 74 00 69 00 6f 00 6e 00 2e 00 74 00 61 00 72 00 6c 00 61 00 6e 00 2e 00 50 00 65 00 72 00 73 00 6f 00 6e 00 00 00 00 00 09 00 00 00 4a 00 6f 00 68 00 6e 00 20 00 57 00 69 00 63 00 6b 00 00 00 ac 07 00 00 08 00 00 00 4e 00 65 00 77 00 20 00 59 00 6f 00 72 00 6b 00 00 00 00 00
Readable characters: kz.application.tarlan.PersonJohn WickNew York
Restored object: Person(name=John Wick, dateOfBirth=1964, address=New York)
What’s happening here? First, we get a Parcel instance through obtain() - this isn’t creating a new object, but getting one from a pool. Parcel uses object pooling to minimize allocations. Then we call writeParcelable(), which in turn calls our writeToParcel() method from generated code. The marshall() method returns raw ByteArray - the contents of the internal buffer. After finishing work, we must call recycle(), returning Parcel to the pool.
For deserialization, the process is reverse: get a new Parcel, call unmarshall() to load bytes into buffer, reset read position to beginning through setDataPosition(0), and read the object back through readParcelable(), passing ClassLoader for loading the needed class. And again don’t forget recycle().
Important warning: this example is shown exclusively to demonstrate what’s inside Parcel. Android documentation explicitly warns: data obtained through marshall() should not be used for long-term storage. You cannot save it to disk, send over network, store in database or SharedPreferences. Parcel format is highly optimized specifically for local IPC and doesn’t guarantee compatibility between different Android platform versions. If you need to save data for a long time, use standard serialization (Serializable, kotlinx.serialization) or other general-purpose mechanisms.
What do we see in the hex dump? First bytes 1c 00 00 00 - this is the class name length (28 characters). Then comes the full class name kz.application.tarlan.Person in UTF-16 format (each character takes 2 bytes, hence all those 00 between letters). After this come field data: string length, the string “John Wick” itself in UTF-16, the number 1964 (ac 07 in little-endian), and the string “New York” also in UTF-16.
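The “ac 07 in little-endian” claim is easy to check on any JVM; this small Java sketch encodes 1964 with little-endian byte order and prints the same byte sequence seen in the dump:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class LittleEndianCheck {
    public static void main(String[] args) {
        // 1964 = 0x7AC; little-endian puts the low byte first.
        ByteBuffer buf = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt(1964);
        StringBuilder hex = new StringBuilder();
        for (byte b : buf.array()) hex.append(String.format("%02x ", b));
        System.out.println(hex.toString().trim()); // ac 07 00 00
    }
}
```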
Parcel writes the fully qualified class name including package. Strings are stored in UTF-16. If we used writeToParcel() directly without writing the class name through writeParcelable(), there would be less data. But for IPC, the full class name is necessary for correct deserialization on the receiving side. In the Intent context between Activities, data lives microseconds in memory, so overhead is negligible. But this is another argument against using marshall() for storage - the format contains platform-specific details like full class names.
By the way, Serializable also works in Android through Intent:
// This is also valid code, if Person implements Serializable
val person = Person("John Wick", 1964, "New York")
intent.putExtra("person_data", person) // Android supports Serializable too
But behind the scenes, something completely different happens. Android is forced to create ObjectOutputStream, start reflection, write all metadata, create many temporary objects. On the other side, the same thing: ObjectInputStream, reflection, metadata parsing, object creation. The result is the same, but works slower and creates significant load on Garbage Collector. That’s exactly why in Android documentation you’ll see the recommendation everywhere: use Parcelable for passing data between components.
But let’s return to the question: what is this Parcel that we write data to? Remember, we talked about how Android needs a mechanism working directly with memory, without intermediate abstraction layers? Here’s where it gets most interesting. Parcel is not just another Java class for working with data. It’s a thin wrapper over a native C++ structure, and it works through JNI (Java Native Interface):
public final class Parcel {
private long mNativePtr; // Pointer to native structure
public final void writeString(String val) {
nativeWriteString(mNativePtr, val);
}
private static native void nativeWriteString(long nativePtr, String val);
}
Notice the mNativePtr field - it’s just a long type number that stores a pointer to a C++ structure. When you call parcel.writeString("John Wick"), on the Java side only call redirection to the native nativeWriteString() method happens. And then C++ code work begins.
The native implementation is located in the frameworks/native/libs/binder/Parcel.cpp file in Android sources. This code works directly with memory: the string is converted to UTF-16, length information is added to it, and all this is written to a linear memory buffer. No temporary Java objects, no abstraction layers like ObjectOutputStream, just byte writing to memory.
Now imagine what happens when you pass an object through Intent.putExtra() between Activities. This memory buffer is sent through the Binder driver, which works at the Linux kernel level. Binder performs a single copy through the kernel, minimizing data movement between processes. On the receiving side, a new Parcel is created, which receives a pointer to this memory buffer, and you simply read data from it in the same order you wrote. Recall ObjectOutputStream with its abstraction layers, protocols and metadata - there’s nothing like that here. Only memory, pointers, and minimal overhead.
Let’s now look at exactly what code the compiler generates for our simple Person class with the @Parcelize annotation. If we open the compiled .class file through a decompiler, we’ll see something like this:
@Parcelize
data class Person(
val name: String,
val dateOfBirth: Int,
val address: String
) : Parcelable {
// Generated by compiler
override fun writeToParcel(parcel: Parcel, flags: Int) {
parcel.writeString(name)
parcel.writeInt(dateOfBirth)
parcel.writeString(address)
}
override fun describeContents(): Int = 0
companion object {
@JvmField
val CREATOR = object : Parcelable.Creator<Person> {
override fun createFromParcel(parcel: Parcel): Person {
return Person(
parcel.readString()!!,
parcel.readInt(),
parcel.readString()!!
)
}
override fun newArray(size: Int): Array<Person?> {
return arrayOfNulls(size)
}
}
}
}
Look at the operation order: when writing, we call writeString, then writeInt, then writeString again. When reading, the order is absolutely identical: readString, readInt, readString. This isn’t coincidence or whim. This is a critically important requirement because Parcel is an untyped buffer, a flat byte array without any information about data types.
When you call parcel.readInt(), it simply takes the next 4 bytes from the buffer and interprets them as integer. There’s no check “is this really an int?”. If you accidentally break the order - for example, first write int, and when reading try to read string, you’ll get completely incorrect data or app crash. That’s exactly why manual Parcelable implementation was so dangerous: one mistake, and a bug is ready.
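This failure mode can be reproduced without Android at all. The sketch below uses a plain java.nio.ByteBuffer as a stand-in for an untyped flat buffer (this is not Parcel’s actual implementation, just the same idea): the reader must know the layout in advance, and reading in the wrong order silently yields garbage instead of an error:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class UntypedBufferDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(64);
        byte[] name = "John".getBytes(StandardCharsets.UTF_8);
        buf.putInt(name.length);   // length prefix
        buf.put(name);             // string bytes
        buf.putInt(1964);          // an int field
        buf.flip();

        // Correct read order reproduces the data...
        int len = buf.getInt();
        byte[] s = new byte[len];
        buf.get(s);
        int year = buf.getInt();
        System.out.println(new String(s, StandardCharsets.UTF_8) + " " + year);

        // ...but reading an int first interprets the length prefix as the
        // value: the buffer has no type information to catch the mistake.
        buf.rewind();
        System.out.println("misread as int: " + buf.getInt());
    }
}
```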
With @Parcelize, this problem is solved at the compiler level. It analyzes the primary constructor, generates write and read methods in correct order, and guarantees their synchronization. You can’t make a mistake because you don’t write code manually.
Now let’s consider a more complex case: nullable fields. How does Parcel work with null values if it’s just bytes in memory without type metadata?
@Parcelize
data class Person(
val name: String,
val dateOfBirth: Int,
val address: String?
) : Parcelable
For nullable address, code with a check is generated:
override fun writeToParcel(parcel: Parcel, flags: Int) {
parcel.writeString(name)
parcel.writeInt(dateOfBirth)
parcel.writeString(address) // Even if address == null, this works!
}
The answer is simple and elegant: writeString() has built-in null support. When you pass null, the method writes a special marker: the value -1 as the string length. When reading, readString() sees this marker and returns null. It turns out Android supported null-safety at the API level even before Kotlin made the concept central to the language, even though Java itself had no notion of it.
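The -1-length convention is simple enough to reimplement on plain data streams. The following Java sketch mirrors the idea behind Parcel’s writeString/readString (it is not Parcel’s actual code): a -1 length marks null, and the marker round-trips back to null on read:

```java
import java.io.*;

public class NullMarkerDemo {
    static byte[] write(String s) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        if (s == null) {
            out.writeInt(-1);          // special marker instead of a length
        } else {
            out.writeInt(s.length());
            out.writeChars(s);
        }
        return buf.toByteArray();
    }
    static String read(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        int len = in.readInt();
        if (len == -1) return null;    // the marker round-trips to null
        StringBuilder sb = new StringBuilder(len);
        for (int i = 0; i < len; i++) sb.append(in.readChar());
        return sb.toString();
    }
    public static void main(String[] args) throws IOException {
        System.out.println(read(write("New York"))); // New York
        System.out.println(read(write(null)));       // null
    }
}
```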
Now let’s talk about the describeContents() method, which in our example simply returns 0. You might have noticed we never override it; the compiler generates it automatically. Why is it even needed? In 99% of cases, you really just need to return 0. But there’s one special scenario: file descriptors.
Imagine your class contains ParcelFileDescriptor - this could be an open file, socket, or other system resource. Such resources require special handling when passing between processes because these aren’t just data in memory, these are real operating system objects. In such a case, you need to return Parcelable.CONTENTS_FILE_DESCRIPTOR so Binder understands the object contains system resources and handles them correctly:
@Parcelize
data class FileWrapper(val fd: ParcelFileDescriptor) : Parcelable {
override fun describeContents(): Int = Parcelable.CONTENTS_FILE_DESCRIPTOR
}
The second parameter of the writeToParcel(Parcel dest, int flags) method is flags. In most cases it’s ignored, but it can contain the Parcelable.PARCELABLE_WRITE_RETURN_VALUE flag. This flag says the object is being passed as a return value from a Binder call, and after writing, some resources can be freed because they’re no longer needed on the sending side.
Now let’s look at a more complex scenario: nested objects. In real applications, we rarely work with a simple class of three primitive fields. Usually we have entire object graphs where one class contains other classes. For example, Person might contain an Address object:
@Parcelize
data class Address(val city: String, val street: String) : Parcelable
@Parcelize
data class Person(
val name: String,
val dateOfBirth: Int,
val address: Address
) : Parcelable
What happens during serialization? When the compiler reaches the address field in the Person class, it generates a call to parcel.writeParcelable(address, flags). This method in turn calls address.writeToParcel(), and the entire nested object is serialized recursively. During deserialization, the reverse occurs: parcel.readParcelable<Address>(Address::class.java.classLoader) reads the data and recreates the Address object. All recursion is handled automatically, with no runtime overhead, because everything is generated at compile time.
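As a rough sketch (an approximation based on the description above, not actual decompiled output; the real generated code also wires up the CREATOR and handles nulls), the write side for this nested Person would look like:

```kotlin
// Approximate sketch of @Parcelize-generated code for the nested case
override fun writeToParcel(parcel: Parcel, flags: Int) {
    parcel.writeString(name)
    parcel.writeInt(dateOfBirth)
    // Serializes Address recursively via its own writeToParcel()
    parcel.writeParcelable(address, flags)
}
```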
Now let’s consider another common case: collections. Lists, sets, maps - standard data structures in any application. How does Parcel work with them?
@Parcelize
data class Team(
val name: String,
val members: List<String>
) : Parcelable
The following code is generated:
override fun writeToParcel(parcel: Parcel, flags: Int) {
parcel.writeString(name)
parcel.writeStringList(members)
}
For collections, Parcel provides specialized, optimized methods. writeStringList() first writes the list size as an integer, then writes each string in sequence. When reading, readStringList() first reads the size, creates an ArrayList of the needed capacity, then reads the strings one by one. writeIntArray(), writeParcelableList(), writeMap(), and many other methods for various collection types work similarly. Each is optimized for a specific data type, making serialization as efficient as possible.
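The read side mirrors this. @Parcelize generates it for you, but a hand-written equivalent of the CREATOR for Team might look roughly like this sketch (approximate, not actual decompiled output):

```kotlin
// Approximate hand-written equivalent of the generated CREATOR for Team
companion object CREATOR : Parcelable.Creator<Team> {
    override fun createFromParcel(parcel: Parcel): Team {
        val name = parcel.readString()!!
        // readStringList() reads the size, then fills the list string by string
        val members = ArrayList<String>()
        parcel.readStringList(members)
        return Team(name, members)
    }

    override fun newArray(size: Int): Array<Team?> = arrayOfNulls(size)
}
```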
But @Parcelize has an important limitation you need to remember. The compiler generates code only based on the primary constructor. If you have properties declared outside the constructor, they’ll simply be ignored:
@Parcelize
data class Person(
val name: String,
val dateOfBirth: Int
) : Parcelable {
var address: String = "" // This field will NOT be serialized!
}
Why? Because the compiler analyzes only the primary constructor signature. It doesn’t and cannot know about properties you initialize in the class body or in init blocks. If you need to serialize such a property, simply add it to the primary constructor.
Sometimes situations arise where automatic generation isn’t enough. For example, you need to serialize a class from a third-party library that doesn’t implement Parcelable. Or special serialization logic is needed - say, encrypting data before writing. For such cases, there’s a Parceler interface that allows defining custom logic:
data class Person(
val name: String,
val dateOfBirth: Int,
val address: String
)
object PersonParceler : Parceler<Person> {
override fun create(parcel: Parcel): Person {
return Person(
parcel.readString()!!,
parcel.readInt(),
parcel.readString()!!
)
}
override fun Person.write(parcel: Parcel, flags: Int) {
parcel.writeString(name)
parcel.writeInt(dateOfBirth)
parcel.writeString(address)
}
}
@Parcelize
@TypeParceler<Person, PersonParceler>
data class Team(
val name: String,
val leader: Person
) : Parcelable
You get full control over the serialization process for a specific type, while the other class fields are handled automatically.
We’ve spent a lot of time explaining the internal structure of Parcelable, and you might have noticed how much it differs from Serializable: a native C++ implementation, direct work with memory without intermediate objects, no reflection, integration with Binder at the Linux kernel level. All these architectural decisions were made for a reason: they provide the performance necessary for real-time IPC. We’ll see specific numbers in the benchmarks section.
But it would be unfair to tell only about advantages. Parcelable has serious limitations you need to understand.
Platform binding. Parcelable works only on Android; it isn’t a cross-platform solution. Imagine you’re developing a mobile application with a shared business layer for Android and iOS. Your data models should work on both platforms, but Parcelable exists only in the Android SDK. On iOS it simply doesn’t exist. Moreover, as Kotlin Multiplatform grew in popularity, the need arose for unified code that works everywhere: on Android, on iOS, in the browser through Kotlin/JS, on the backend through Kotlin/JVM, even in native applications through Kotlin/Native. Parcelable doesn’t fit this picture at all.
Absence of versioning. Remember serialVersionUID in Serializable? There’s no such mechanism here. If you change class structure between app versions - add a field, remove a field, change order - you’ll have to manually handle compatibility, as we saw in the Externalizable example. This is your responsibility, and there will be no automatic checking.
Limited applicability. Parcel is optimized for IPC, not for long-term storage. Technically you can save a serialized Parcel to a file or SharedPreferences, but you shouldn’t. The Parcel format can change between Android versions, and your saved data will become unreadable after a system update. Parcel was created for passing data between components here and now, not for disk storage.
Absence of format choice. Parcelable serializes data into one single binary format optimized for Binder. But what if you need to send data over network? Modern APIs exchange JSON or Protocol Buffers. What if you need to save configuration in human-readable form? Need YAML or TOML. Parcelable isn’t suitable for this. It solves one task - IPC in Android, and solves it brilliantly. But only this one task.
These very limitations created demand for a universal solution. Imagine the ideal: a library that works on all Kotlin platforms, supports multiple data formats (JSON, Protobuf, CBOR, XML), uses code generation for maximum performance, ensures type-safety at compiler level, and remains as simple to use as @Parcelize.
Sounds too good to be true? But exactly such a solution was created by the JetBrains team. Meet the final hero of our story.
kotlinx.serialization
History of Emergence and Philosophy

In 2017, Kotlin was experiencing a real boom. Google announced it as the official language for Android development, the community was actively growing, and JetBrains began an ambitious project - Kotlin Multiplatform (KMP). The idea was revolutionary: write code once and run it everywhere. On Android through Kotlin/JVM and Android Runtime, on iOS through Kotlin/Native with compilation to native code, in browser through Kotlin/JS, on server through regular JVM. But to realize this idea, one critically important element was missing.
Imagine a developer writing a mobile application with common business layer. Data models, network requests, API work - all this should work identically on Android and iOS. On Android they have Parcelable for IPC, there’s Gson or Moshi for JSON, there are many ready solutions. But as soon as this code is compiled for iOS through Kotlin/Native, everything breaks. Parcelable doesn’t exist. Gson uses reflection, which works completely differently (or doesn’t work at all) in native environment. Moshi requires code generation through KAPT, which isn’t supported in Kotlin/Native.
A vicious circle emerged: KMP promised “write once, run everywhere”, but for the basic task of data serialization you had to write different code for each platform. JSON parsing on Android was solved by one library, on iOS by another, in JS by a third. Yet serialization is a fundamental operation no project can do without.
The JetBrains team understood: for Kotlin Multiplatform success, a cross-platform serialization library is needed that works equally well on all supported platforms. But simply “yet another serialization library” would be a half measure. Something fundamentally new was needed, taking into account Kotlin’s unique features as a language and the experience of all previous solutions.
In 2018, the first version of kotlinx.serialization appeared in experimental status. The library immediately stood out with its approach. Unlike Gson, which uses runtime reflection, kotlinx.serialization works entirely through a compiler plugin. The @Serializable annotation triggers generation of specialized code at compile time. This means several important things.
Performance. No runtime reflection, no class structure analysis during execution. Everything is already ready and optimized at compile time. In JVM this gives speed comparable to Moshi or even exceeding it. In Kotlin/Native, where reflection is limited and slow, this is critically important.
Type safety. The compiler analyzes your data structure and generates type-safe code. If you try to serialize a type for which there’s no serializer, you’ll get a compilation error, not a runtime exception in production. This is a huge advantage over reflection-based solutions where errors manifest only during execution.
Cross-platform nature. The compiler plugin works on all Kotlin target platforms. The same code with @Serializable compiles into optimal bytecode for JVM, into JavaScript for browser, into native code for iOS. There are no platform-specific dependencies, no API differences between platforms.
Format multiplicity. Unlike Parcelable, which works with only one format, kotlinx.serialization is modular. The library core defines how a class structure turns into a sequence of write and read operations, while each format is a separate module. Want JSON? Add kotlinx-serialization-json. Need Protobuf? kotlinx-serialization-protobuf. CBOR for a compact binary representation? kotlinx-serialization-cbor. The same class with one @Serializable annotation can be serialized into any of these formats without code changes.
By 2020, the library came out of experimental status and reached version 1.0, becoming stable and ready for production use. Today it’s the de facto standard for serialization in Kotlin Multiplatform projects and a serious alternative to Gson/Moshi in pure JVM/Android applications.
Let’s see how this works in practice.
How This Looks in Practice
Let’s take our familiar Person class and see how serialization looks:
@Serializable
data class Person(
val name: String,
val dateOfBirth: Int,
val address: String
)
fun main() {
val person = Person("John Wick", 1964, "New York")
val json = Json.encodeToString(person)
println(json)
}
Output:
{"name":"John Wick","dateOfBirth":1964,"address":"New York"}
One @Serializable annotation, and the class is ready for serialization. The syntax resembles @Parcelize, but it works differently and on all platforms. Notice the result: this isn’t a binary format with metadata like Serializable’s, nor a flat buffer like Parcel’s, but pure JSON.
But JSON is only one of the formats. Remember, we talked about modularity? The same class can be serialized to Protocol Buffers, CBOR, or even XML:
val person = Person("John Wick", 1964, "New York")
val json = Json.encodeToString(person)
val protobuf = ProtoBuf.encodeToByteArray(Person.serializer(), person)
val cbor = Cbor.encodeToByteArray(Person.serializer(), person)
JSON is readable and widely supported. Protocol Buffers is a compact binary format for efficient data transmission. CBOR is similar to MessagePack and occupies an intermediate position. The choice depends on the task: JSON for APIs, Protobuf or CBOR for mobile caches and network protocols, JSON or YAML for configurations.
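The Json entry point itself is configurable through a builder. A small sketch with a few commonly used options (assuming the @Serializable Person class above; prettyPrint, ignoreUnknownKeys, and isLenient are flags on the Json builder):

```kotlin
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json

// A customized Json instance: configuration is set once and reused
val lenientJson = Json {
    prettyPrint = true        // indented, human-readable output
    ignoreUnknownKeys = true  // don't fail if the server adds new fields
    isLenient = true          // tolerate minor deviations from strict JSON
}

fun main() {
    val pretty = lenientJson.encodeToString(Person("John Wick", 1964, "New York"))
    println(pretty)
}
```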
And most interesting: the library is open for extension. If you need a specific format, say YAML, TOML, or your own proprietary protocol, you can implement your own Encoder and Decoder. The API is designed so that serializers generated for your classes will work with any encoder. The community has already created many formats: kotlinx-serialization-hocon for configs, kotlinx-serialization-properties for Java Properties files; there’s even an experimental YAML implementation. This is a unique capability: one class with one annotation works with dozens of formats without code changes.
Deserialization mirrors this for any format:
val json = """{"name":"John Wick","dateOfBirth":1964,"address":"New York"}"""
val person = Json.decodeFromString<Person>(json)
println(person)
Output: Person(name=John Wick, dateOfBirth=1964, address=New York)
Internal Structure: Compiler Plugin
Remember how Serializable uses reflection through ObjectOutputStream, how Externalizable requires manual method implementation, how Parcelable generates code through @Parcelize? With kotlinx.serialization the approach is fundamentally different.
When you add @Serializable annotation, the compiler plugin generates a special serializer. If we decompile bytecode, we’ll see approximately the following structure:
object PersonSerializer : KSerializer<Person> {
override val descriptor: SerialDescriptor = buildClassSerialDescriptor("Person") {
element<String>("name")
element<Int>("dateOfBirth")
element<String>("address")
}
override fun serialize(encoder: Encoder, value: Person) {
val composite = encoder.beginStructure(descriptor)
composite.encodeStringElement(descriptor, 0, value.name)
composite.encodeIntElement(descriptor, 1, value.dateOfBirth)
composite.encodeStringElement(descriptor, 2, value.address)
composite.endStructure(descriptor)
}
override fun deserialize(decoder: Decoder): Person {
// Field reading and object creation logic
}
}
The key difference from all previous approaches: the serialize method accesses fields directly as value.name, value.dateOfBirth, value.address. No reflection, no getDeclaredFields() or field.setAccessible(true). The compiler knows the class structure and generates direct calls. This gives performance comparable to a manual Externalizable implementation, but without writing the code by hand.
The second important difference: the architecture separates “what to serialize” from “how to serialize”. PersonSerializer knows nothing about JSON; it simply calls methods of the Encoder abstraction. The specific format (JSON, Protobuf, CBOR) is determined by the encoder instance:
val person = Person("John Wick", 1964, "New York")
// Same serializer, different formats
val json = Json.encodeToString(person)
val protobuf = ProtoBuf.encodeToByteArray(Person.serializer(), person)
val cbor = Cbor.encodeToByteArray(Person.serializer(), person)
Try doing this with Parcelable or Serializable: they’re tightly bound to their single format.
Control Over Serialization Process
As with Serializable, sometimes you need to intervene in the serialization process. Imagine you have a field with cached data or temporary calculations that don’t need saving. In Serializable we used the transient keyword; here the @Transient annotation does the same job:
@Serializable
data class User(
val id: String,
val username: String,
@Transient val cachedAvatar: Bitmap? = null,
@Transient var lastAccessTime: Long = 0L
)
During serialization, the cachedAvatar and lastAccessTime fields are ignored: the JSON will contain only id and username. Note that transient fields must have default values (the compiler enforces this), because they are absent from the serialized data and must be filled in during deserialization.
Another frequent problem: your Kotlin classes use camelCase, but the API server requires snake_case. This is a classic pain point when integrating with backends. In Serializable, Externalizable, and Parcelable there’s no such capability at all: these mechanisms work directly with class field names. You’d have to either name fields user_id in code (violating Kotlin conventions), create a separate DTO layer with mapping, or, in the case of Externalizable, write reams of code in writeExternal/readExternal.
This is one area where kotlinx.serialization more closely resembles libraries like Gson or Moshi, but with an important difference: compile-time checking. In Gson, the @SerializedName annotation is processed at runtime through reflection; here the compiler plugin generates the code immediately. The @SerialName annotation is all you need:
@Serializable
data class ApiResponse(
@SerialName("user_id") val userId: String,
@SerialName("created_at") val createdAt: Long,
@SerialName("is_active") val isActive: Boolean
)
During serialization this produces {"user_id":"123","created_at":1698765432,"is_active":true}, while in Kotlin code you keep working with the idiomatic names userId, createdAt, isActive. No separate DTO classes, no mapping layer, no tools like MapStruct or ModelMapper. Of the four serialization approaches we’ve considered, only kotlinx.serialization provides this flexibility out of the box.
Third problem: the API evolves, new fields are added, old ones become optional. In Serializable, versioning is handled through serialVersionUID, but that’s a fragile mechanism. Here, nullable types and default values do the work:
@Serializable
data class User(
val id: String,
val username: String,
val email: String? = null,
val premium: Boolean = false
)
The email field is nullable with a default of null, so it can be absent from the JSON. The premium field defaults to false, so if it’s missing from the JSON, the default is used. Given {"id":"123","username":"john"}, the library creates an object with email = null and premium = false. If the JSON contains all the fields, their values are used.
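A related subtlety: by default the Json format does not write properties whose values equal their defaults; the encodeDefaults builder flag controls this. A sketch using the User class above:

```kotlin
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json

fun main() {
    val user = User(id = "123", username = "john")

    // Default Json omits properties equal to their default values
    println(Json.encodeToString(user))
    // {"id":"123","username":"john"}

    // encodeDefaults = true writes them explicitly
    val verbose = Json { encodeDefaults = true }
    println(verbose.encodeToString(user))
    // {"id":"123","username":"john","email":null,"premium":false}
}
```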
But if you try to deserialize JSON without required field:
val json = """{"username":"john"}"""
val user = Json.decodeFromString<User>(json)
You’ll get an exception: SerializationException: Field 'id' is required for type 'User', but it was missing. Type safety works at runtime too. Unlike Gson, which silently substitutes null even for a non-null type (so the app crashes later with a NullPointerException in production), here you get a clear exception immediately during deserialization.
Working With Types Outside Your Control
A serious problem arises when you need to serialize a class you can’t put @Serializable on. This could be a class from a third-party library, legacy Java code, or a standard type like java.util.Date. With Serializable, such classes either don’t serialize correctly or produce bloated binary data.
Good news: serializers for many common types already exist. The kotlinx.datetime library ships serializers for its date and time types, and the core library covers standard Kotlin types such as collections, Pair, and Triple out of the box. But for JVM-specific types like java.util.Date or java.util.UUID there is no ready serializer out of the box.
So if you need a type for which there’s no serializer, or you need specific logic (for example, encryption before writing), you can create a custom serializer. kotlinx.serialization provides the KSerializer<T> interface for this. Create an object implementing it and specify exactly how to serialize and deserialize the type:
object DateAsLongSerializer : KSerializer<Date> {
override val descriptor = PrimitiveSerialDescriptor("Date", PrimitiveKind.LONG)
override fun serialize(encoder: Encoder, value: Date) {
encoder.encodeLong(value.time)
}
override fun deserialize(decoder: Decoder): Date {
return Date(decoder.decodeLong())
}
}
Now you can use this serializer for fields of type Date:
@Serializable
data class Event(
val title: String,
@Serializable(with = DateAsLongSerializer::class)
val timestamp: Date
)
Date will be serialized as a plain number (a Unix timestamp), not as an object with all its internal fields. During deserialization, the number automatically turns back into a Date. This works for any type: classes from libraries you don’t control, custom classes from closed-source dependencies, Java collections with specific logic.
Polymorphism and Sealed Classes
This is one of the most powerful capabilities, one that doesn’t exist in Serializable, Parcelable, or Externalizable without a huge amount of boilerplate code. Imagine an API that returns different event types:
@Serializable
sealed class Event {
abstract val timestamp: Long
}
@Serializable
@SerialName("user_login")
data class UserLoginEvent(
override val timestamp: Long,
val userId: String
) : Event()
@Serializable
@SerialName("purchase")
data class PurchaseEvent(
override val timestamp: Long,
val amount: Double,
val currency: String
) : Event()
@Serializable
data class EventLog(val events: List<Event>)
Sealed classes form a closed type hierarchy: the compiler knows all possible subtypes. During serialization, the library automatically adds a discriminator field "type" with the specific class name:
val log = EventLog(
events = listOf(
UserLoginEvent(1698765432000, "user123"),
PurchaseEvent(1698765433000, 99.99, "USD")
)
)
val json = Json.encodeToString(log)
Result:
{
"events": [
{"type":"user_login","timestamp":1698765432000,"userId":"user123"},
{"type":"purchase","timestamp":1698765433000,"amount":99.99,"currency":"USD"}
]
}
During deserialization, the library looks at the "type" field, determines which concrete subclass to create, and restores the correct hierarchy:
val log = Json.decodeFromString<EventLog>(json)
when (val event = log.events[0]) {
is UserLoginEvent -> println("User ${event.userId} logged in")
is PurchaseEvent -> println("Purchase: ${event.amount} ${event.currency}")
}
Type safety is fully preserved. The compiler knows the events list can contain only Event subtypes, and the sealed class guarantees all possible variants are known. If an unknown type arrives in the JSON, you get an exception; if the structure doesn’t match, also an exception. Try implementing this with Serializable and you’ll write reams of code with type checks, instanceof, casts, and manual deserialization routing.
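The discriminator key is also configurable: if a backend uses a name other than the default "type", the classDiscriminator setting on the Json builder changes it. A sketch using the Event hierarchy above:

```kotlin
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json

fun main() {
    // Write "event_kind" instead of the default "type" discriminator field
    val json = Json { classDiscriminator = "event_kind" }

    val log = EventLog(listOf(UserLoginEvent(1698765432000, "user123")))
    println(json.encodeToString(log))
    // The discriminator now appears as "event_kind":"user_login"
}
```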
With all its advantages, kotlinx.serialization has limitations you need to know about.
Compiler plugin requirement. Unlike Gson or Jackson, which are simply added to dependencies, here a compiler plugin is needed. For most projects this isn’t a problem, but in specific scenarios with limited compilation control it can become an obstacle.
Class structure limitations. The compiler plugin serializes only properties with backing fields: those in the primary constructor and initialized properties in the class body. Delegated and computed properties are ignored, and every primary constructor parameter must be a property. Some class shapes simply can’t be expressed.
Code size. Generating serializers for each class increases final APK/JAR size. In large applications with hundreds of data classes this can be noticeable. Price for performance and type-safety.
Versioning. There’s no built-in versioning mechanism like serialVersionUID in Serializable. Backward compatibility is ensured through default values and nullable types, but requires attention when schema evolves.
At the same time, kotlinx.serialization remains the only full-fledged cross-platform solution for Kotlin. One code works on JVM, JS, Native with same performance and type-safety guarantees.
We’ve covered all four serialization approaches: Serializable with its reflection and Java legacy, Externalizable with full manual control, Parcelable with native optimization for Android IPC, and kotlinx.serialization with cross-platform nature and multiple formats. Each has its strengths and weaknesses, each solves a specific class of tasks.
But it’s time to move from theory to practice. We’ve talked a lot about performance, data size, overhead. It’s time to check these statements with concrete numbers.
Benchmarks: Performance Comparison
Throughout the article we’ve discussed differences in approaches: Serializable is slow due to reflection, Externalizable is faster thanks to manual control, Parcelable is optimized for Android IPC, kotlinx.serialization uses code generation. We’ve talked about data size, metadata impact on final volume, differences between text and binary formats. But all these were theoretical considerations or general statements.
Time has come to conduct systematic testing and get concrete metrics. We’ll measure four key parameters:
- Serialization speed - how much time is required to convert object to bytes
- Deserialization speed - how much time is required to restore object from bytes
- Data size - how many bytes serialized representation takes
- Number of allocations - how many objects are created in the process
The fourth parameter deserves special attention. The number of memory allocations in many scenarios is more important than speed or data size. Why? Because each allocation is not just memory allocation, it’s future work for Garbage Collector. You can write code that executes in microseconds and creates compact data representation, but if hundreds of intermediate objects had to be created for this, the operation cost increases many times. These objects load heap, provoke GC pauses, fragment memory. On mobile devices where memory is limited and energy efficiency is critical, frequent GC cycles directly affect battery life and interface smoothness. Low allocation count indirectly indicates good solution architecture: efficient buffer usage, object reuse, absence of excessive copying. Therefore, when we talk about serialization optimality, we look not only at execution time, but also at how much garbage it leaves behind.
Testing is conducted on Android device to get realistic data for mobile development.
For performance measurement we use the Jetpack Benchmark library, the official tool from Google for accurate measurement of Android code performance. The library automatically performs warmup iterations so the JIT compiler stabilizes, then runs many measurements, discards outliers, and calculates statistically significant results.
The test classes are chosen to be realistic: a user model with various data types:
@[kotlinx.serialization.Serializable Parcelize]
data class User1(
var id: String,
var name: String,
var email: String,
var age: Int,
var isActive: Boolean,
var registrationDate: Long,
var tags: List<String> = emptyList()
) : Serializable, Parcelable {
companion object {
private const val serialVersionUID = 1L
}
}
data class User2(
var id: String = "",
var name: String = "",
var email: String = "",
var age: Int = 0,
var isActive: Boolean = false,
var registrationDate: Long = 0L,
var tags: List<String> = emptyList()
) : Externalizable {
override fun writeExternal(out: ObjectOutput) {
out.writeUTF(id)
out.writeUTF(name)
out.writeUTF(email)
out.writeInt(age)
out.writeBoolean(isActive)
out.writeLong(registrationDate)
out.writeInt(tags.size)
tags.forEach { out.writeUTF(it) }
}
override fun readExternal(input: ObjectInput) {
id = input.readUTF()
name = input.readUTF()
email = input.readUTF()
age = input.readInt()
isActive = input.readBoolean()
registrationDate = input.readLong()
val size = input.readInt()
tags = List(size) { input.readUTF() }
}
}
Each test runs thousands of times, and the results are averaged. We measure time in nanoseconds for accuracy. Data size is measured in bytes after full serialization.
For fair comparison, it’s critically important to use identical data in all tests. We created two classes: User1 and User2. Both have absolutely identical structure (seven fields: id, name, email, age, isActive, registrationDate, tags) and are filled with identical values (user John Wick with age 55, email “john.wick@continental.com” and list of four tags).
Why two classes, not one? The reason is technical: Externalizable and Serializable use the same API through ObjectOutputStream. Under the hood, ObjectOutputStream first checks whether the class implements Externalizable, and if it does, uses its logic, ignoring Serializable. If we created one class implementing both interfaces, the Serializable test would actually execute the Externalizable code, making the comparison incorrect. Therefore User1 is used for Serializable, Parcelable, and kotlinx.serialization, while User2 is used only for Externalizable.
The name difference (User1 vs User2) is one character. Since the class name is written into the serialized data (especially in Serializable and Parcelable), this affects size, but the impact is minimal: one digit takes the same number of bytes in either case. Thus, we maintain objectivity in the size comparison.
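Measuring the size of the Serializable representation is straightforward on plain JVM. A minimal, self-contained sketch with a simplified stand-in class (MiniUser is hypothetical, not the benchmark’s User1):

```kotlin
import java.io.ByteArrayOutputStream
import java.io.ObjectOutputStream
import java.io.Serializable

data class MiniUser(val id: String, val age: Int) : Serializable

// Serializes an object in memory and returns the byte count,
// the same way the benchmark measures data size
fun serializedSize(obj: Any): Int {
    val baos = ByteArrayOutputStream()
    ObjectOutputStream(baos).use { it.writeObject(obj) }
    return baos.size()
}

fun main() {
    // The stream header, class name, and field metadata all count
    // toward the total, which is why Serializable output is so large
    println(serializedSize(MiniUser("user_123456789", 55)))
}
```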
For overall assessment we’ll immediately serialize and deserialize for each method, and evaluate total time of these two processes. We use Microbenchmark from Jetpack Benchmark library. This configuration is specifically designed for measuring micro-operations that execute in microseconds or nanoseconds. Library runs each test thousands of times (iteration count depends on execution speed), automatically determines optimal warmup iteration count for JIT compiler stabilization, collects statistics (minimum, maximum, median, coefficient of variation), discards outliers and produces statistically significant results. Median value is used instead of average, as median is more resistant to outliers caused by background system processes or garbage collection. The benchmark class is as follows:
@RunWith(AndroidJUnit4::class)
class SerializationBenchmark {
@OptIn(ExperimentalBenchmarkConfigApi::class)
@get:Rule
val benchmarkRule = BenchmarkRule(MicrobenchmarkConfig(traceAppTagEnabled = true))
var user1 = User1(
id = "user_123456789",
name = "John Wick",
email = "john.wick@continental.com",
age = 55,
isActive = true,
registrationDate = 1672531200000L,
tags = listOf("assassin", "legendary", "baba_yaga", "continental")
)
var user2 = User2(
id = "user_123456789",
name = "John Wick",
email = "john.wick@continental.com",
age = 55,
isActive = true,
registrationDate = 1672531200000L,
tags = listOf("assassin", "legendary", "baba_yaga", "continental")
)
@Test
fun javaSerializable() = benchmarkRule.measureRepeated {
// Serialization
val baos = ByteArrayOutputStream()
ObjectOutputStream(baos).use { it.writeObject(user1) }
val serialized = baos.toByteArray()
// Deserialization
ByteArrayInputStream(serialized).use { bais ->
ObjectInputStream(bais).use { it.readObject() as User1 }
}
}
@Test
fun javaExternalizable() = benchmarkRule.measureRepeated {
// Serialization
val baos = ByteArrayOutputStream()
ObjectOutputStream(baos).use { it.writeObject(user2) }
val serialized = baos.toByteArray()
// Deserialization
ByteArrayInputStream(serialized).use { bais ->
ObjectInputStream(bais).use { it.readObject() as User2 }
}
}
@[Test OptIn(ExperimentalSerializationApi::class)]
fun kotlinxSerializable() = benchmarkRule.measureRepeated {
// Serialization
val protobufArray = ProtoBuf.encodeToByteArray(User1.serializer(), user1)
// Deserialization
val result: User1 = ProtoBuf.decodeFromByteArray(User1.serializer(), protobufArray)
}
@Test
fun androidParcelable() = benchmarkRule.measureRepeated {
// Serialization
val source = Parcel.obtain()
source.writeParcelable(user1, 0)
val bytes = source.marshall()
source.recycle()
// Deserialization
val destination = Parcel.obtain()
destination.unmarshall(bytes, 0, bytes.size)
destination.setDataPosition(0)
val classLoader = User1::class.java.classLoader
val result: User1? = destination.readParcelable<User1>(classLoader, User1::class.java)
destination.recycle()
}
}
Notice several implementation details of this benchmark that ensure a fair comparison. For Java Serializable and Externalizable we removed the intermediate file creation usually shown in educational examples. Instead, we use ByteArrayOutputStream and ByteArrayInputStream to work directly with byte arrays in memory. Why does this matter? Because when data is passed between processes and components in Android, there is no intermediate layer writing to a file: data travels through memory. Parcelable works with bytes in memory through the Parcel buffer, so for the comparison to be correct, Java Serializable and Externalizable must also work with byte arrays, without disk operations.
The second important point concerns the format choice for kotlinx.serialization. We use ProtoBuf, not JSON, though JSON is far more popular and more often associated with this library. The reason is simple: JSON is a text format that incurs extra costs for string parsing, escape-sequence processing, and converting numbers from text to binary. ProtoBuf is a binary format, conceptually analogous to what Serializable, Externalizable, and Parcelable use: they all work with binary data representations. If we compared JSON with binary formats, kotlinx.serialization would look worse not because of the library’s quality, but solely due to the nature of a text representation. So, for a fair comparison, ProtoBuf is chosen. This demonstrates the real capabilities of kotlinx.serialization code generation without the artificial handicap of a text format.
All tests measure the full cycle: serializing an object to bytes plus immediately deserializing it back to an object. This is a realistic scenario for Android IPC, where data is serialized on the sender side and immediately deserialized on the receiver side. Here are the test results:
Serialization Results: Byte Representations
For complete understanding of differences between approaches, let’s look at how serialized data of our User object looks in each format.
Java Serializable:
Size: 388 bytes
String representation (all bytes): ¬í??sr??kz.android.benchmark.User1????????????????I??ageZ??isActiveJ??registrationDateL??emailt??Ljava/lang/String;L??idq??~??L??nameq??~??L??tagst??Ljava/util/List;xp??????7????j È??t??john.wick@continental.comt??user_123456789t?? John Wicksr??java.util.Arrays$ArrayListÙ¤<¾ÍÒ??[??at??[Ljava/lang/Object;xpur??[Ljava.lang.String;ÒVçé{G????xp??????t??assassint?? legendaryt?? baba_yagat??continental
Hex representation (all bytes): aced00057372001a6b7a2e616e64726f69642e62656e63686d61726b2e557365723100000000000000010200074900036167655a000869734163746976654a0010726567697374726174696f6e446174654c0005656d61696c7400124c6a6176612f6c616e672f537472696e673b4c0002696471007e00014c00046e616d6571007e00014c0004746167737400104c6a6176612f7574696c2f4c6973743b78700000003701000001856aa0c8007400196a6f686e2e7769636b40636f6e74696e656e74616c2e636f6d74000e757365725f3132333435363738397400094a6f686e205769636b7372001a6a6176612e7574696c2e4172726179732441727261794c697374d9a43cbecd8806d20200015b0001617400135b4c6a6176612f6c616e672f4f626a6563743b7870757200135b4c6a6176612e6c616e672e537472696e673badd256e7e91d7b47020000787000000004740008617373617373696e7400096c6567656e64617279740009626162615f7961676174000b636f6e74696e656e74616c
Statistics: Zero bytes: 46, ASCII characters: 292
Java Externalizable:
Size: 166 bytes
String representation (all bytes): ¬í??sr??kz.android.benchmark.User2¼xÁL±????xpwt??user_123456789?? John Wick??john.wick@continental.com??????7????j È??????????assassin?? legendary?? baba_yaga??continentalx
Hex representation (all bytes): aced00057372001a6b7a2e616e64726f69642e62656e63686d61726b2e5573657232bc95789dc1924cb10c000078707774000e757365725f31323334353637383900094a6f686e205769636b00196a6f686e2e7769636b40636f6e74696e656e74616c2e636f6d0000003701000001856aa0c800000000040008617373617373696e00096c6567656e646172790009626162615f79616761000b636f6e74696e656e74616c78
Statistics: Zero bytes: 20, ASCII characters: 122
Kotlinx.serialization (ProtoBuf):
Size: 110 bytes
String representation (all bytes): user_123456789 John Wickjohn.wick@continental.com 7(0ÕÖ0:assassin: legendary: baba_yaga:continental
Hex representation (all bytes): 0a0e757365725f31323334353637383912094a6f686e205769636b1a196a6f686e2e7769636b40636f6e74696e656e74616c2e636f6d2037280130809083d5d6303a08617373617373696e3a096c6567656e646172793a09626162615f796167613a0b636f6e74696e656e74616c
Statistics: Zero bytes: 0, ASCII characters: 94
Android Parcelable:
Size: 296 bytes
String representation (all bytes): ??????k??z??.??a??n??d??r??o??i??d??.??b??e??n??c??h??m??a??r??k??.??U??s??e??r??1????????????????u??s??e??r??_??1??2??3??4??5??6??7??8??9?????????? ??????J??o??h??n?? ??W??i??c??k????????????j??o??h??n??.??w??i??c??k??@??c??o??n??t??i??n??e??n??t??a??l??.??c??o??m??????7??????????????È j????????????????a??s??s??a??s??s??i??n?????????? ??????l??e??g??e??n??d??a??r??y?????? ??????b??a??b??a??_??y??a??g??a????????????c??o??n??t??i??n??e??n??t??a??l??????
Hex representation (all bytes): 1a0000006b007a002e0061006e00640072006f00690064002e00620065006e00630068006d00610072006b002e0055007300650072003100000000000e00000075007300650072005f0031003200330034003500360037003800390000000000090000004a006f0068006e0020005700690063006b000000190000006a006f0068006e002e007700690063006b00400063006f006e00740069006e0065006e00740061006c002e0063006f006d000000370000000100000000c8a06a85010000040000000800000061007300730061007300730069006e0000000000090000006c006500670065006e00640061007200790000000900000062006100620061005f00790061006700610000000b00000063006f006e00740069006e0065006e00740061006c000000
Statistics: Zero bytes: 169, ASCII characters: 113
Comparative Size Table
| Approach | Size (bytes) | Relative to minimum |
|---|---|---|
| kotlinx.serialization (ProtoBuf) | 110 | Baseline (100%) |
| Java Externalizable | 166 | +51% |
| Android Parcelable | 296 | +169% |
| Java Serializable | 388 | +253% |
The results might seem unexpected. Parcelable, which we positioned as the optimized solution for Android, takes almost three times more space than kotlinx.serialization and almost twice as much as the Java mechanisms. Why?
The answer lies in string encoding. Parcel uses UTF-16 to store all string data. Look carefully at the Parcelable hex representation: between the characters you see zero bytes (00). This is a characteristic feature of UTF-16, where each ASCII character takes two bytes instead of one. Our User object contains several string values: id, name, email, and four elements in the tags list. In UTF-16, the string “John Wick” (9 characters) takes 18 bytes, “john.wick@continental.com” (25 characters) takes 50 bytes, and so on.
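The doubling is easy to verify with the JDK's charset API. A minimal check (the extra length prefixes and padding that Parcel adds on top of the raw characters are not modeled here):

```java
import java.nio.charset.StandardCharsets;

public class EncodingSizes {
    public static void main(String[] args) {
        String name = "John Wick";                   // 9 ASCII characters
        String email = "john.wick@continental.com";  // 25 ASCII characters

        // UTF-16LE: two bytes per ASCII character, as in the Parcel dump
        System.out.println(name.getBytes(StandardCharsets.UTF_16LE).length);  // 18
        System.out.println(email.getBytes(StandardCharsets.UTF_16LE).length); // 50

        // UTF-8: one byte per ASCII character, as in the ProtoBuf dump
        System.out.println(name.getBytes(StandardCharsets.UTF_8).length);     // 9
        System.out.println(email.getBytes(StandardCharsets.UTF_8).length);    // 25
    }
}
```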
Meanwhile, Externalizable and kotlinx.serialization use more compact formats. ObjectOutputStream.writeUTF() uses a modified version of UTF-8, in which ASCII characters take one byte. Protocol Buffers uses its own efficient encoding, with variable-length encoding for numbers and an optimized string representation.
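Both techniques can be demonstrated in a few lines. Below is a sketch of the Protocol Buffers varint scheme (7 payload bits per byte, high bit set on every byte except the last) — my own toy implementation of the documented wire format, not the library's code — plus `DataOutputStream.writeUTF()`, which writes a 2-byte length prefix followed by one byte per ASCII character:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class CompactEncodings {
    // Toy varint encoder following the Protocol Buffers wire format:
    // 7 bits of payload per byte, high bit marks "more bytes follow"
    static byte[] encodeVarint(long value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((value & ~0x7FL) != 0) {
            out.write((int) ((value & 0x7F) | 0x80));
            value >>>= 7;
        }
        out.write((int) value);
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Small numbers take one byte instead of a fixed-width 4 or 8:
        // the age value 55 appears in the ProtoBuf dump as the single byte 0x37
        System.out.println(encodeVarint(55).length);   // 1
        System.out.println(encodeVarint(300).length);  // 2

        // Modified UTF-8 via writeUTF: 2-byte length prefix + 1 byte per ASCII char
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new DataOutputStream(bos).writeUTF("assassin");
        System.out.println(bos.size());                // 10 = 2 + 8
    }
}
```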
But let’s remember the context: Parcelable was created not to minimize data size for long-term storage, but for maximally fast transmission between Android processes through Binder. UTF-16 was chosen for good reason: it is the native encoding for Java/Kotlin strings in memory. When Parcel writes a string as UTF-16, it essentially copies the String’s internal representation directly into the buffer without re-encoding. This is fast. Reading is equally cheap: the bytes simply turn back into a String. There is no overhead for re-encoding from UTF-8 to UTF-16 and back, as happens with the other mechanisms.
Moreover, in the context of IPC through Binder, a difference of 100-200 bytes is insignificant. The data lives in memory for microseconds, and transmission happens inside the device through the kernel with minimal copying. Here serialization/deserialization speed matters more than buffer size. If you needed to send data over the network or save it to a file, the choice would be different. But for IPC between Activity, Service, and other Android components, the compromise is justified: more memory, but faster processing.
Performance Results
Testing was conducted on a Nothing Phone (2a) device (model A065) with a Dimensity 7200 Pro processor (8 cores, up to 2.99 GHz), 12 GB of RAM, and Android 15 (API 35). All tests were performed using the Jetpack Benchmark library in speed compilation mode, with automatic JIT compiler warmup and statistical processing of the results.
We measure the full cycle: serializing an object to bytes plus immediately deserializing it back to an object. This is a realistic scenario for IPC, where data is read immediately on the other side.
Results Table (median values):
| Approach | Time (ns) | Time (μs) | Relative to fastest | Allocations |
|---|---|---|---|---|
| Android Parcelable | 2,824 | 2.82 | Baseline (1.0×) | 12 |
| kotlinx.serialization (ProtoBuf) | 4,707 | 4.71 | 1.67× slower | 41 |
| Java Externalizable | 9,531 | 9.53 | 3.38× slower | 83 |
| Java Serializable | 30,985 | 30.99 | 10.97× slower | 201 |
Detailed Statistics:
| Approach | Min (ns) | Max (ns) | Median (ns) | CV* | Iterations |
|---|---|---|---|---|---|
| Android Parcelable | 2,799 | 2,867 | 2,824 | 0.56% | 42,680 |
| kotlinx.serialization | 4,398 | 4,882 | 4,707 | 2.47% | 16,697 |
| Java Externalizable | 9,252 | 10,130 | 9,531 | 1.88% | 11,228 |
| Java Serializable | 30,356 | 32,243 | 30,985 | 1.67% | 2,887 |
*CV (Coefficient of Variation) indicates result stability; lower means more stable.
Results Analysis
The benchmark results give a clear picture of each approach’s trade-offs.
Android Parcelable is the undisputed speed leader. The median time for a full cycle is only 2.82 microseconds: 1.67 times faster than kotlinx.serialization, 3.4 times faster than Externalizable, and almost 11 times faster than Serializable. A coefficient of variation of only 0.56% indicates exceptional stability. We have already covered the reasons for this performance: direct memory work through JNI, the absence of abstraction layers, native UTF-16 without re-encoding, Binder integration at the kernel level, and object pooling to minimize allocations. Only 12 allocations per operation: Parcel.obtain() from the pool plus minimal service objects. Yes, the object takes 296 bytes due to UTF-16, but in the IPC context this is a price worth paying for such speed.
kotlinx.serialization (ProtoBuf) balances speed and size. The median of 4.71 microseconds is only 1.67 times slower than Parcelable, while the data is almost three times smaller (110 bytes vs 296). This is an impressive result for a solution that works cross-platform. Code generation through the compiler plugin gives direct field access without reflection, and Protocol Buffers uses efficient binary encoding with variable-length numbers. The 41 allocations come from creating the ByteArray and the encoder’s and decoder’s internal buffers. The coefficient of variation of 2.47% is a bit higher than Parcelable’s, but still acceptable. With JSON instead of ProtoBuf, the results would be worse due to text parsing and larger data size.
Java Externalizable: manual labor without obvious advantages. The median of 9.53 microseconds is 3.4 times slower than Parcelable and 2 times slower than kotlinx.serialization. At the same time, the data size (166 bytes) is larger than ProtoBuf (110 bytes), though smaller than Parcelable (296 bytes). Why? We write the data manually through ObjectOutput, but it still goes through ObjectOutputStream with its buffering and protocol: a ByteArrayOutputStream is created, ObjectOutputStream wraps it, protocol service markers are written (TC_OBJECT, TC_BLOCKDATA), then our data, then a ByteArrayInputStream and ObjectInputStream for reading. The 83 allocations are all those intermediate stream and buffer objects. We got control over field order and the possibility of custom logic, but not real performance. In a world with kotlinx.serialization and Parcelable, few application scenarios for Externalizable remain.
Java Serializable is the worst on every metric. The median of 30.99 microseconds is almost 11 times slower than Parcelable. This is not just slow; it is catastrophically slow for a mobile device. The data size of 388 bytes is the largest of all, and the 201 allocations are 16 times more than Parcelable’s. We know the reasons: reflection through ObjectStreamClass.lookup(), field traversal through Field.get(), descriptor creation for each class in the hierarchy, writing full class names and field types, and many temporary objects. The coefficient of variation of 1.67% says the results are stable, but that is poor consolation when you are stably slow. The only advantage of Serializable is how easy it is to add to a class (just : Serializable), but the price of that simplicity is too high for production code.
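The descriptor machinery mentioned above is observable through the public API. ObjectStreamClass is the same class ObjectOutputStream uses internally; a small probe (with an illustrative `User` class, not the benchmark's) shows the reflectively discovered metadata that gets written into the stream for every class:

```java
import java.io.ObjectStreamClass;
import java.io.ObjectStreamField;
import java.io.Serializable;

public class DescriptorCost {
    // Illustrative class, simplified relative to the benchmark's User1
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        String id;
        String name;
        int age;
    }

    public static void main(String[] args) {
        // The same lookup ObjectOutputStream performs before writing an object
        ObjectStreamClass desc = ObjectStreamClass.lookup(User.class);
        System.out.println(desc.getName()); // the full class name that goes into the stream
        for (ObjectStreamField f : desc.getFields()) {
            // each serializable field is discovered reflectively,
            // together with its JVM type code ('I' for int, 'L' for objects)
            System.out.println(f.getName() + " : " + f.getTypeCode());
        }
    }
}
```

Every name and type signature printed here is part of what inflates the 388-byte payload, and building this descriptor is part of what inflates the 30.99-microsecond median.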
An important observation about iteration counts. Notice the “Iterations” column in the table. Parcelable performed 42,680 iterations in the allotted time, kotlinx.serialization 16,697, Externalizable 11,228, and Serializable only 2,887. These are not random numbers: the benchmark library runs as many iterations as it can within a fixed time while maintaining statistical significance, so the slower the operation, the fewer iterations fit. The 14.8-fold difference between Parcelable (42,680) and Serializable (2,887) clearly illustrates the performance gap.
Conclusion: So Spoke the Benchmarks
We’ve traveled from the very origins of serialization in Java to modern cross-platform solutions. We started with Serializable, which appeared in JDK 1.1 and is still used despite obvious performance and security problems. We covered Externalizable, which gives control but does not solve the fundamental problems of Java serialization. We studied Parcelable, created by Google specifically for Android IPC, where every microsecond and every allocation matters. And we finished with modern kotlinx.serialization, which works everywhere Kotlin works: from Android to iOS, from the JVM to Native.
The benchmark numbers speak for themselves. Parcelable is 11 times faster than Serializable and creates 16 times fewer objects. kotlinx.serialization produces data 3.5 times more compact than Serializable at a speed comparable to Parcelable. But the main thing is not the absolute numbers; it is understanding why the numbers come out this way. Reflection versus code generation. Universality versus specialization. Ease of use versus control.
There is no single right answer to the question “which serialization should I use?”. There is context, requirements, and constraints. For Android IPC the choice is obvious; for network APIs it is different; for cross-platform projects it is different again. But now that you know how each approach works under the hood, you can make a conscious choice instead of repeating claims from article headlines.
Thank you for reading to the end. I hope this article gave you not just a comparison table, but a deep understanding of the evolution of serialization in the JVM and Kotlin ecosystem. Now, when an interviewer asks “Why is Parcelable faster than Serializable?”, you can explain JNI, UTF-16, object pooling, and Binder, rather than just saying “because the documentation says so”.