<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Ladna's Blog]]></title><description><![CDATA[Software Engineer and random ponderer]]></description><link>https://ladmerc.com</link><generator>RSS for Node</generator><lastBuildDate>Sun, 12 Apr 2026 11:57:23 GMT</lastBuildDate><atom:link href="https://ladmerc.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[The Perfect Deception: Why Even Perfect Technology May Never Show Us the Real World]]></title><description><![CDATA[Phase 0: The Prologue (The Blink Test)

Imagine a life-sized 8K screen showing a perfectly recorded human standing next to the real person.
Now close your eyes for half a second.
While your eyes are c]]></description><link>https://ladmerc.com/the-perfect-deception</link><guid isPermaLink="true">https://ladmerc.com/the-perfect-deception</guid><category><![CDATA[Philosophy]]></category><category><![CDATA[technology]]></category><category><![CDATA[future tech]]></category><category><![CDATA[camera]]></category><dc:creator><![CDATA[Ladna Meke]]></dc:creator><pubDate>Sat, 14 Mar 2026 06:14:35 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/62d316c4e52a5d129a0c140e/9d532e2c-a581-4ccf-b8a4-82f9684a8d8c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Phase 0: The Prologue (The Blink Test)</h3>
<blockquote>
<p>Imagine a life-sized 8K screen showing a perfectly recorded human standing next to the real person.</p>
<p>Now close your eyes for half a second.</p>
<p>While your eyes are closed, the screen and the real person switch places.</p>
<p>You open your eyes.</p>
<p>You instantly know which one is fake.</p>
</blockquote>
<p>Why?</p>
<h3>Phase 1: The Mundane Awe</h3>
<p>Every now and then, it is easy to catch yourself marveling at the most mundane things. You might be walking past a towering high-rise condo, your neck craned back, trying to comprehend the sheer logistics of its construction. You might run your hand along a wooden table and notice the precise, interlocking geometry of the joinery, wondering which ancient human first figured out how to make two pieces of wood hold together without nails. Or maybe you just flick a cheap, gas-station cigarette lighter and stare at the flame, hit with a sudden, overwhelming wave of awe at the mechanics of it all.</p>
<img src="https://cdn.hashnode.com/uploads/covers/62d316c4e52a5d129a0c140e/76184446-1416-4f47-ad26-74785e4f7b9a.png" alt="" style="display:block;margin:0 auto" />

<p>At certain random times, the sheer weight of human progress hits you. You look at the things we’ve built, the physical and digital monuments of our species, and the natural question bubbles up: <em>How could it possibly get better than this? Have we finally hit the apex?</em></p>
<p>Recently, my own wandering thoughts landed on the devices in our living rooms. We surround ourselves with visual and auditory marvels, taking them completely for granted. We walk around every day assuming that our technology is simply reality, captured perfectly.</p>
<h3>Phase 2: The Benchmark of the Past</h3>
<p>To understand how we got here, we have to look back at the earliest versions of "fake reality."</p>
<p>Think about humans centuries ago looking at a masterful oil painting or watching a stage play. They were immersed, sure, but they knew exactly what it was: an imitation. A portrait of a king was obviously not the breathing king.</p>
<p>Now, fast forward to a family sitting in a cinema in the 1940s. They are watching a film shot on the greatest, most state-of-the-art cinematic camera of their era. To them, the moving image was a technological marvel, an absolute triumph of human engineering. But did they think it was the "apex" of visual technology? Did they feel they were looking at reality itself?</p>
<p>Of course not. And more importantly, if anyone in 1940 <em>did</em> look at that screen and think, "Yes, this is as good as it gets," they had absolutely no right to feel that way.</p>
<img src="https://cdn.hashnode.com/uploads/covers/62d316c4e52a5d129a0c140e/7d70d8f0-622d-4082-b8d5-6cac266f4225.png" alt="" style="display:block;margin:0 auto" />

<p>They had no justification for arrogance because the gap between the technology and the real world was impossibly wide. They could look at the fuzzy, grainy, black-and-white projection on the wall, look down at their own physical hands, and instantly register the massive chasm between the screen and reality. Their biological eyes were vastly superior to the technology. The real world was the benchmark, and the screen was visibly failing to match it.</p>
<h3>Phase 3: The Trap</h3>
<p>Now, look at the device you are reading this on. Think about standing in front of an 8K OLED TV, or watching a crystal-clear, hyper-realistic video generated entirely by AI.</p>
<p>When we look at modern displays, we fall into a very logical, very comfortable trap. Today, the camera has caught up to the human eye. In order to mirror the real world, camera technology evolved by learning to capture <em>more</em> visual "noise"—every stray shadow, every subtle texture, every chaotic bounce of light is retained to make the video or image look authentically real. In fact, with hyper-focus and dynamic range, sometimes the screen feels even clearer than reality. Because the biological limits of the human retina have been matched by the pixel density of our screens, the gap that existed in the 1940s has seemingly vanished.</p>
<img src="https://cdn.hashnode.com/uploads/covers/62d316c4e52a5d129a0c140e/8b31ad29-cacc-411b-a1ea-5211b79595dc.png" alt="" style="display:block;margin:0 auto" />

<p>This is exactly why we are the <em>only</em> generation in human history with the right to be arrogant. We actually have a physiological excuse to feel like we’ve reached the absolute ceiling of visual technology.</p>
<p>Sure, we aren't completely naive—we know we aren't <em>quite</em> at the final finish line. We know 8K might eventually become 16K, or the contrast ratios might get a bit richer. But we know those leaps are no longer massive, paradigm-shifting jumps; they are just tiny, inevitable, incremental steps. We feel that the true apex of video technology is entirely within reach, just a matter of time. We look at our screens and think: <em>We’ve essentially solved it. We’ve captured the visual world.</em></p>
<h3>Phase 4: The Audio Doubt</h3>
<p>But what about audio? If visual technology is resting comfortably near its apex, surely the way we record and play back sound is right there alongside it, right?</p>
<p>Not quite. Let’s introduce a little contrast. Think about sitting in a room with a top-of-the-line, million-dollar surround sound system. The audio is lossless, perfectly balanced, and engineered by masters. Now, close your eyes and listen to a recording of a person speaking.</p>
<p>Can you tell you are listening to a speaker, and not a physical human being standing in the room with you?</p>
<p>Yes. Immediately. Your ears can perceive the deception instantly. It doesn't quite feel like the real world. And the reason why is deeply ironic: audio feels fake because it <em>sounds</em> <em>too clean</em>.</p>
<img src="https://cdn.hashnode.com/uploads/covers/62d316c4e52a5d129a0c140e/5fe701b5-99c7-4d09-8aea-3418d7056004.png" alt="" style="display:block;margin:0 auto" />

<p>While video chased the chaotic "noise" of the real world to feel authentic, audio technology evolved by systematically scrubbing reality out of the recording. We built soundproof booths, we invented aggressive noise cancellation, and we isolated vocals to make them pristine. Beyond just removing the noise, modern audio is often heavily augmented and processed.</p>
<p>Audio gives itself away. Even these expensive sound systems cannot fully convince your brain that a real human is standing in the room with you. Something about it always feels slightly wrong. Too contained. Too directional. Too detached from the physical space around you.</p>
<p>But the real world isn't pristine or augmented. The real world is visceral and messy. Real sound comes from vocal cords, resonates through a physical chest, bounces off the hardwood floor, reflects off the drywall, and wraps around the unique cartilage of your specific ear before hitting your eardrum. Even the absolute best speaker on earth is just a vibrating cone blasting isolated, sterile sound from a static point in the room. It makes it glaringly obvious that what you are hearing is an imitation.</p>
<p>At this point in the thought experiment, it’s easy to settle into a neat conclusion: Okay, so video technology has successfully mirrored reality, but audio still has a long way to go to catch up.</p>
<h3>Phase 5: The Epiphany</h3>
<p>But this is where I realized I was being completely disingenuous. I was letting my eyes lie to me.</p>
<p>Did you notice what we just did? While we were busy picking apart the obvious flaws of audio, we instinctively gave video a complete free pass. We scrutinized the speaker, but we blindly accepted the TV as "real."</p>
<p>Why did we do that? Why did our brains so easily spot the limitations of audio, but intuitively let the screen off the hook?</p>
<p>The answer is physiological. Humans are incredibly visual creatures; we build our sense of reality primarily through our eyes. Because the modern video <em>looked</em> high-resolution, our visual cortex was satisfied enough by the pixel density to simply ignore the missing physics. We innately trust our eyes, so we gave the screen a free pass. Meanwhile, our ears—which evolved to be highly attuned to the 3D physics of space for predator detection—immediately flagged the flat audio as fake.</p>
<p>But if we hold video to the exact same unforgiving standard as audio, the illusion shatters.</p>
<p>Let’s run the ultimate test. I call it <strong>The Blink Test</strong>.</p>
<p>Imagine placing a massive, life-sized, floor-to-ceiling 8K TV right next to a real, breathing human being. Now, close your eyes for a split second. While your eyes are closed, the giant TV and the person switch places. You open your eyes.</p>
<img src="https://cdn.hashnode.com/uploads/covers/62d316c4e52a5d129a0c140e/329bc6ee-9d17-49bb-bd0a-e3a6ea0a149d.png" alt="" style="display:block;margin:0 auto" />

<p>Could you tell the difference?</p>
<p>Yes. Instantly. Without a shadow of a doubt.</p>
<p><strong>The Blink Test:</strong></p>
<blockquote>
<p>If a perfect recording of reality can be instantly distinguished from reality after a single blink, the medium has not solved reality.</p>
</blockquote>
<p>Even if the camera resolution flawlessly matches your human vision, your brain will immediately recognize the massive TV as a fake. You would instantly know that one is an imitation of the real world, and the other is the real world itself. (And if you are wondering about Virtual Reality—no, even waking up from a coma with a VR headset strapped to your face wouldn't fool you for long. VR attempts to fake the 3D space, but your biological eyes still know they are focusing on a flat, artificially lit panel two inches away).</p>
<p>The realization hits hard: video is suffering from the exact same limitation as audio. They share the identical flaw, but our brains were just too blinded by pretty pixels to spot the similarity at first.</p>
<h3>Phase 6: The "Frontend vs. Backend" Realization</h3>
<p>So, why do we fail the Blink Test? If the camera quality is perfect, why is the illusion broken so easily?</p>
<p>To make this coherent, it helps to look at it through the lens of software engineering. Conceptually, many systems have a "Backend" (where data is captured, processed, and stored) and a "Frontend" (how that data is rendered and presented to the user). Now, not every system neatly fits this binary—as any seasoned engineer knows, what one layer calls a backend, say an Nginx proxy, may itself just be the frontend for another downstream service—but as a philosophical model, dividing <em>capture</em> from <em>rendering</em> perfectly explains our current technological ceiling.</p>
<img src="https://cdn.hashnode.com/uploads/covers/62d316c4e52a5d129a0c140e/dc1a1a21-b442-4eb0-8cc6-d70ab204acff.png" alt="" style="display:block;margin:0 auto" />

<p>When it comes to human perception, we have almost completely maxed out the Backend. Our data capture has nearly exceeded what flat displays can reproduce. Our high-res cameras and studio microphones try to capture reality with near-perfect detail.</p>
<p>The problem is our Frontend. Our display mechanisms are severely bottlenecked by archaic, dead-end technology.</p>
<p>When you open your eyes during the Blink Test, your brain instantly flags the TV as fake because of missing physical cues: reflected light and 3D parallax. In the real world, light from a lamp bounces <em>off</em> a person, giving them texture and depth. A TV is just a flat rectangle shooting artificial light directly into your eyes. Furthermore, the world is 3D. If you look at a real person and shift your head even a fraction of an inch, your perspective changes. You see slightly more of their left cheek; the background behind them shifts dynamically. But a TV screen is entirely flat. No matter how you move your head, the geometry remains locked. It is a 2D bottleneck trying to render a 3D universe. It fails The Blink Test.</p>
<p>Audio suffers from the exact same Frontend failure. The microphone (Backend) captures perfect sound, but the speaker (Frontend) ignores physics. It lacks the spatial mapping—the way sound waves interact with the physical geometry of your body and the room.</p>
<p>We haven't reached the apex of reality. We have just nearly maxed out the illusion of the flat screen and the static speaker.</p>
<h3>Phase 7: The True Evolution</h3>
<p>This brings us full circle back to our own arrogance.</p>
<p>We started by looking at the audiences of the 1940s, concluding they had no right to believe they were at the apex of technology. But with this realization, we are humbled. We have absolutely no right to be arrogant either. We haven't reached the finish line of video technology, and the apex isn't just a few minor upgrades away. We are simply at the absolute limit of what 2D screens and static speakers can do.</p>
<p>The true apex of technology isn't about capturing better data; it is about synthesizing perception. The next era will focus on solving the physics of visual and auditory rendering, bypassing our archaic displays to perfectly hack human biology. It means moving away from glowing rectangles and moving toward true light-field displays or holograms that shoot photons at the exact depth of real life, dynamically solving the parallax problem as you move your head. It means moving away from static cones pushing air, and moving toward spatial audio systems that map sound waves in real-time to your specific physical anatomy, tricking your brain into hearing a whisper right over your shoulder.</p>
<img src="https://cdn.hashnode.com/uploads/covers/62d316c4e52a5d129a0c140e/39051074-9854-4e4f-b060-fce0e7d89eb8.png" alt="" style="display:block;margin:0 auto" />

<p>To achieve this—to perfectly erase the medium so that the artificial is entirely indistinguishable from the real—we are still generations away.</p>
<p>So, we are not at the end of audio and video evolution. We are merely at the end of the era of the screen. The true apex won't be a better, thinner rectangle mounted in your living room. The true apex will be the complete and total erasure of the medium itself—when the technology becomes so perfectly intertwined with our physical senses that the line between the artificial and the real completely disappears.</p>
<h3>Phase 8: The Epilogue (The Final Deception)</h3>
<p>At the end of the day, our technology has simply gotten incredibly good at tricking us. We are close to maxing out the cameras and the microphones, but we are still serving that perfect data through flat glass and vibrating cones. Our brains are simply too smart, and our biology too finely tuned to the physical world, to be fooled by a 2D render of a 3D universe forever.</p>
<p>However, there's a terrifying loophole.</p>
<p>Throughout this entire thought experiment, we used the physical real world—and our biological human eyes—as the ultimate, unforgiving benchmark. We assumed that if technology could perfectly mimic what our eyes see, it had perfectly captured reality.</p>
<p>But what if our eyes are lying to us?</p>
<p>If you look at a flower, you see soft yellow petals. If a bee looks at the exact same physical flower, it sees a glowing, high-contrast landing pad of ultraviolet electromagnetic signatures. An eagle sees the world through an entirely different focal and spectrum paradigm. They are all pulling from the exact same physical Backend—the raw, objective data of atoms, photons, and electromagnetic waves. But that identical data is being intercepted and rendered completely differently by each species' unique biological Frontend.</p>
<p>So, what is the actual "visual truth"? There isn't one. The human eye isn't the objective ground truth of the universe; it is just one specific, highly filtered rendering engine.</p>
<p>It is a Frontend designed to protect us by systematically dropping massive amounts of data. Right now, as you read this, you are submerged in an ocean of invisible information. Wi-Fi signals are passing through your chest. Bluetooth handshakes are bouncing off your walls. Infrared heat, radio waves, and cosmic radiation are ever-present. They exist in the physical world just as tangibly as a wooden table, but our biology actively blinds us to them to prevent sensory overload.</p>
<p>Which brings us to the final, terrifying paradox of the technological apex.</p>
<p>If the true evolution of video is to move beyond the archaic screen—if it eventually merges perfectly with our physical senses—it won't stop at just mimicking the visible world. The ultimate apex of visual technology will let us perceive parts of reality our biology cannot detect. It will render the invisible. It will let us see the Wi-Fi.</p>
<p>But herein lies the final act of deception: <em>Wi-Fi doesn’t have a color.</em> For our brains to comprehend this invisible reality, the technology will have to invent a visual language for it. It will have to arbitrarily paint Wi-Fi as a shimmering gold mist, or Bluetooth as a pulsing blue geometric web.</p>
<p>And because our biological eyes can never actually see these forces naturally, we will have absolutely no baseline to compare them to. The "Blink Test" becomes entirely impossible. We will have no choice but to blindly believe the rendering we are given.</p>
<p>Indeed, this is where our arrogance comes full circle.</p>
<p>If the medium reaches a stage where it flawlessly tricks us into accepting things our eyes cannot physically see, we are right back to exactly where we started: we simply cannot fathom what the true "apex" will be. In that future, technological advancement won't be about capturing reality—it will be an arms race of deception. It will be about optimizing how best to represent the invisible, iterating on those representations until our brains have no choice but to accept them as the fundamental truth of the universe.</p>
<p>In that final state, the illusion will be so absolute, and the provided reality so overwhelmingly rich, that the ultimate paradigm shift will occur. We won't just accept the deception. We will look back at our own biological eyes—the very organs we once arrogantly used as the ultimate benchmark of reality—and realize they were just primitive, obsolete hardware all along.</p>
<p>The final irony may be this: we spent centuries trying to make technology match human perception, only to eventually discover that human perception was the lowest-fidelity rendering engine all along.</p>
<p><em>If you want to pull this thread even further, this exact realization is why I explored whether</em> <a href="https://ladmerc.com/have-humans-evolved-more-in-the-last-century-than-in-all-of-history-combined"><em>humans have evolved more in the last century than in all of history combined</em></a><em>.</em></p>
]]></content:encoded></item><item><title><![CDATA[Have humans evolved more in the last century than in all of history combined?]]></title><description><![CDATA[Ever get the feeling that the world is changing faster than we can keep up? That the phone in your pocket has more power than the computers that sent humanity to the Moon, and it’ll be obsolete in two years?
It’s not just a feeling. It’s a fact.
Some...]]></description><link>https://ladmerc.com/have-humans-evolved-more-in-the-last-century-than-in-all-of-history-combined</link><guid isPermaLink="true">https://ladmerc.com/have-humans-evolved-more-in-the-last-century-than-in-all-of-history-combined</guid><category><![CDATA[technology]]></category><category><![CDATA[humanity]]></category><category><![CDATA[Science and Technology]]></category><category><![CDATA[evolution]]></category><category><![CDATA[history]]></category><category><![CDATA[transhumanism]]></category><category><![CDATA[Artificial Intelligence]]></category><dc:creator><![CDATA[Ladna Meke]]></dc:creator><pubDate>Sat, 23 Aug 2025 23:46:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755987635520/e6d64a5b-b3f7-42b0-af50-ef12e3e429be.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ever get the feeling that the world is changing faster than we can keep up? That the phone in your pocket has more power than the computers that sent humanity to the Moon, and it’ll be obsolete in two years?</p>
<p>It’s not just a feeling. It’s a fact.</p>
<p>A few days ago, while rewatching Jurassic World, I fell down a rabbit hole, asking a simple question: <strong>Have we, as a species, evolved more in the last 100 years than in all of our previous history combined?</strong> From a purely scientific, biological standpoint, the answer is a clear "NO." Human evolution, the kind driven by DNA and natural selection, moves at a glacial pace. We are, biologically, almost identical to the humans who lived a thousand years ago.</p>
<p>But that answer feels profoundly wrong, doesn't it? Because in every other way, we are an entirely different species. This is the crux of it: our sociocultural and technological evolution has uncoupled from our biology, and it’s hitting warp speed. To really understand the sheer vertigo of our current moment, we have to appreciate the sheer scale of the journey that brought us here.</p>
<p>Let's put it in perspective.</p>
<blockquote>
<p>Imagine the entire story of our species, <em>Homo sapiens</em>, as a single day. If we appeared just after midnight, we spent the first 23 hours and 30 minutes of that day as hunter-gatherers. The Roman Empire rose and fell in the last 10 minutes. The entire Industrial Revolution took place in the last 60 seconds. And the device you're using to read this? It was born in the final tick of the clock.</p>
</blockquote>
<p>This is our story: a species defined by long periods of slow change, followed by an explosion of progress so violent and so rapid that we haven't yet begun to comprehend it.</p>
<hr />
<h3 id="heading-part-1-the-long-dawn-of-consciousness">Part 1: The Long Dawn of Consciousness</h3>
<p>For hundreds of thousands of years, we were just another clever primate. Our evolution was happening, but you'd need a geologist's sense of time to notice it. Yet, within this immense quiet, a series of fundamental "unlocks" occurred, creating a new kind of being on planet Earth.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755986303108/5ef841f5-9827-4afd-8e5d-191147c3bda2.png" alt="A cinematic, realistic illustration of early humans huddled around the first controlled fire, showing a mixture of awe, fear, and dawning intelligence." class="image--center mx-auto" /></p>
<h4 id="heading-the-foundational-sparks-prehistory"><strong>The Foundational Sparks (Prehistory)</strong></h4>
<ul>
<li><p><strong>The Control of Fire (~1 million years ago):</strong> This was arguably our first great divergence from the animal kingdom. Fire wasn't just a tool; it was a captured piece of the sun. It provided warmth, protection, and cooked food, which in turn fueled the growth of our most precious and energy-hungry organ: the brain.</p>
</li>
<li><p><strong>Complex Tools (~500,000 years ago):</strong> A sharpened rock is one thing. A hafted spear is another. The creation of composite tools showed a new level of planning and abstract thought. We were becoming the planet's apex predator through ingenuity, not just brawn.</p>
</li>
<li><p><strong>The Cognitive Revolution (~70,000 years ago):</strong> This is the moment <em>we</em> truly arrived. Something shifted in our minds, giving birth to symbolic language, myth, and a profound new toolbox for survival and meaning.</p>
<ul>
<li><p><strong>Art and Music:</strong> We began to create for reasons beyond pure survival. The first bone flutes and cave paintings show a mind that could not only see the world but also interpret it, find beauty in it, and share that experience with others.</p>
</li>
<li><p><strong>Religion and Spirituality:</strong> Whether a discovery about reality or an invention to explain it, the emergence of spirituality was revolutionary "software." It answered unanswerable questions, provided comfort, and created a framework of shared myths and moral codes that allowed us to cooperate in massive numbers, bound together by trust in something bigger than ourselves.</p>
</li>
</ul>
</li>
</ul>
<h4 id="heading-the-slow-grind-of-civilization"><strong>The Slow Grind of Civilization</strong></h4>
<p>After we learned to think and dream together, we began to reshape the world. The Agricultural Revolution (~10,000 BCE) led to villages and cities. And in these new societies, we began experimenting with our greatest invention: our own governance. In Ancient Greece, we saw the birth of democracy - a radical idea that power should not be seized by the strongest, but shared among citizens.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755986584066/8c697b8d-a91e-44e7-b400-19a6ec71532e.png" alt="A digital painting in the neoclassical style, capturing the ideals of shared governance, reason, and civic duty in ancient Athens." class="image--center mx-auto" /></p>
<p>For the next several thousand years, progress continued its steady, deliberate march, with each century adding new tools to our collective toolkit.</p>
<ul>
<li><p><strong>11th-12th Centuries:</strong> A reawakening began. The first universities in Europe (Bologna, Oxford) became engines of inquiry. The magnetic compass arrived, opening the oceans to more reliable navigation. Towering Gothic cathedrals showcased new leaps in engineering and art.</p>
</li>
<li><p><strong>13th-14th Centuries:</strong> We began to codify our societies. The Magna Carta (1215) established the principle that even a king was not above the law. In China, the invention of gunpowder led to the first firearms, a grim turning point in the mechanization of conflict. The Black Death, a horrific catastrophe, also reset society, helping to end feudalism.</p>
</li>
<li><p><strong>15th-16th Centuries:</strong> The world cracked open. The Gutenberg Printing Press (~1440) created an "internet of paper," accelerating the spread of ideas. The Age of Discovery connected the hemispheres. And the Copernican Revolution displaced Earth from the center of the universe, a humbling and crucial step toward scientific maturity.</p>
</li>
<li><p><strong>17th-18th Centuries:</strong> A revolution of the mind took hold. The Scientific Revolution, led by figures like Newton, gave us a method for understanding reality. The Enlightenment championed reason and individual rights, leading directly to the birth of modern democracy in the American and French Revolutions. The First Industrial Revolution began, and the smallpox vaccine marked our first major victory against a plague.</p>
</li>
<li><p><strong>19th Century:</strong> The pace quickened dramatically. We harnessed electricity, transforming night into day. Darwin's Theory of Evolution reshaped our understanding of ourselves. Germ Theory revolutionized medicine. And, crucially, our moral software got a major upgrade with the global movement for the abolition of slavery.</p>
</li>
</ul>
<hr />
<h3 id="heading-part-2-the-great-acceleration">Part 2: The Great Acceleration</h3>
<p>And then, the 20th century dawned, and the steady march became a frantic, exponential sprint. The last 100 years have seen more change than the previous 10,000.</p>
<p>It began with a cascade of power: the first flight, Einstein's theory of relativity, and the discovery of penicillin. But it was the middle of the century that brought our journey to its most critical pivot point.</p>
<p>In 1945, the Manhattan Project yielded the atomic bomb. This was a new kind of innovation. For the first time, we had created a direct, controllable means for our own annihilation. The quiet quest for knowledge had produced a roar loud enough to end all conversations.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755986629205/097d194a-860a-477a-aaf9-14a89b37c1f0.png" alt="A powerful and iconic black-and-white photograph recreating the historic Trinity Test, conveying a sense of terrifying awe." class="image--center mx-auto" /></p>
<p>Just three years later, in 1948, the world responded with another kind of innovation: the Universal Declaration of Human Rights. This groundbreaking document, for the first time on a global scale, declared that all humans are born free and equal. Crucially, it unequivocally condemned practices like slavery and servitude - which had been fixtures of human civilization for millennia - formally ending a brutal chapter of our history. This juxtaposition is the central story of our time: our capacity for self-destruction grew to be infinite, and in response, we reached for a new, global definition of our shared dignity.</p>
<p>The second half of the century saw the birth of a new substrate for civilization: the digital world.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755986727585/196efcf0-00d5-484d-bf51-2539bc982283.png" alt="A meme format showing the stark and humorous contrast between the first clunky transistor and a modern, intricate CPU." class="image--center mx-auto" /></p>
<ul>
<li><p>The transistor (1947) was the humble, solid-state spark that started it all. It was the neuron of the coming global brain.</p>
</li>
<li><p>ARPANET (1969) became the nervous system, connecting those neurons together.</p>
</li>
<li><p>The Personal Computer (1980s) and the World Wide Web (1990s) distributed this new power to individuals, triggering a Cambrian explosion of information and connectivity.</p>
</li>
</ul>
<p>Alongside this, our moral evolution continued with the Civil Rights Movement and second-wave feminism, showing that our societal software was also capable of rapid upgrades.</p>
<p>Now, in 2025, we stand at the dawn of the next great leap. Artificial Intelligence, a field that has been developing since the 1950s, is finally having its breakout moment. AI isn't just another tool; it's a new kind of mind, an amplifier and accelerator for every other technology we have. It is the force that is tipping us out of the Great Acceleration and into something else entirely.</p>
<hr />
<h3 id="heading-part-3-the-coming-transformation">Part 3: The Coming Transformation</h3>
<p>If the last century took us from horse-drawn carts to AI-piloted drones, where does the same exponential curve take us next? We are moving from merely using technology to the verge of becoming it. The next frontier for our evolution isn't in the savanna or the workshop; it's within our own cells and our own minds.</p>
<h4 id="heading-the-great-filter-humanitys-final-exam"><strong>The Great Filter: Humanity's Final Exam</strong></h4>
<p>Before we speculate about dazzling futures, we must confront the most terrifying possibility: that there is no "next step." The universe is vast and ancient, yet we see no evidence of other intelligent, star-faring civilizations. One chilling explanation for this is "<a target="_blank" href="https://en.wikipedia.org/wiki/Great_Filter"><strong>The Great Filter</strong></a>" - the idea that at some point, developing civilizations invariably encounter a technological or societal challenge so great that it leads to their own extinction.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755990102570/651b8795-34d7-4717-9a78-96fdd3dc5da5.jpeg" alt="post-apocalyptic street showing town-down buildings and rubbles" class="image--center mx-auto" /></p>
<p>For humanity, that filter is no longer a distant threat; it is our present reality. While the dangers will likely reach their most volatile and unpredictable peak <strong>within the next 50 years</strong>, they represent a new, permanent shadow that will follow our species for as many centuries as we survive. They are the final exam, and it is a continuous one. These risks include:</p>
<ul>
<li><p><strong>Nuclear Annihilation:</strong> The original sword over our heads remains. A global conflict, a miscalculation, or a moment of madness is all that separates our complex civilization from a radioactive wasteland.</p>
</li>
<li><p><strong>Biologically Engineered Pandemics:</strong> We have cracked the code of life, meaning a sufficiently advanced actor could theoretically design a pathogen with maximum contagiousness and lethality, creating a silent, replicating threat that could outpace any response.</p>
</li>
<li><p><strong>Unleashed Artificial Intelligence:</strong> The risk is not a Hollywood-style "robot uprising," but something more subtle: creating an AGI whose goals are simply misaligned with our own. We risk not being wiped out in hatred, but as a footnote in a calculation we ourselves initiated.</p>
</li>
</ul>
<p>It's a heavy thought, and it's meant to be. This isn't science fiction sensationalism; it is the central, sobering challenge of our time.</p>
<p>But humanity has never been a species to stare into the abyss and give up. The sheer scale of these risks is precisely what is catalyzing the most radical innovations. So, with this great and terrible filter as our constant backdrop, let us shift our minds. Let's explore the possible paths that open up <em>if</em> we don't annihilate ourselves. What does humanity become if we pass the test? The following scenarios are not just flights of fancy; they are potential outcomes in the high-stakes race against our own extinction.</p>
<h4 id="heading-the-next-50-years-the-age-of-augmentation"><strong>The Next 50 Years: The Age of Augmentation</strong></h4>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755986825917/7a799173-c07a-41f2-9057-f32135bcb583.png" alt="A piece of high-quality, futuristic concept art showing an extreme close-up of a sleek, intriguing, and slightly unnerving cybernetic eye." class="image--center mx-auto" /></p>
<p>With the existential risks firmly in mind, the next five decades will be a race to use our own technology to solve the problems technology has created.</p>
<p>Humans are saddled with a biology that evolved for survival on the African savanna, which creates profound limitations in the world we've built:</p>
<ul>
<li><p><strong>We Degrade and Die:</strong> Our bodies are programmed for obsolescence. We are susceptible to cancer, neurodegenerative diseases, and the slow decay of aging. Our craving for sugar and fat leads to obesity and diabetes.</p>
</li>
<li><p><strong>Our Minds are Limited:</strong> We forget things, we are prone to cognitive biases, and we can only process a fraction of the information we generate. Our brains, designed to remember which tree has the best fruit, are now overwhelmed with data and struggle to comprehend long-term, abstract threats like climate change. Our tribal instincts are weaponized into nationalism and online echo chambers.</p>
</li>
<li><p><strong>We are Physically Fragile:</strong> We are bound to a very specific set of environmental conditions (gravity, oxygen levels, temperature) and are incredibly vulnerable to radiation, vacuum, and physical trauma.</p>
</li>
</ul>
<p>Our once-primitive brains are now in control of planet-altering technologies. Our emotional, short-sighted, and often irrational impulses hold the keys to the nuclear codes, the gene-editing labs, and the AI data centers. This isn't just a mismatch; it's a danger zone. Our finger is on a button our brain was never designed to comprehend, and we are building bigger buttons every day.</p>
<blockquote>
<p><strong>We are a species running stone-age software on space-age hardware.</strong></p>
</blockquote>
<p>Our biology is holding us back - and for the first time in four billion years of life on Earth, one species is realizing it doesn’t have to put up with that anymore. <strong>If biological evolution is too slow, why not use our technological evolution to speed it up?</strong> Why not upgrade the hardware of the human body to match the ambition of the human mind?</p>
<p>The line between human and machine will blur as Brain-Computer Interfaces (BCIs), initially for restoring sight or movement, become elective upgrades. Personalized medicine will shift to "biological programming," with AI-designed nanobots maintaining our health from within. The physical and digital worlds will merge into a single, augmented reality we live in, not just look at through a screen.</p>
<p>This entire field of thought, this deliberate drive to use technology to guide our own evolution, has a name: <strong>transhumanism</strong>. It is the ultimate expression of our journey - the point where the tool-maker turns the tools upon himself. The question for you is whether this is a mere luxury and the ultimate act of hubris, or a survival strategy and the necessary next step.</p>
<p>It's a terrifying ledger of possibilities, but it also perfectly frames the central challenge of our era. It forces us to ask the ultimate question: Are baseline humans, with our Stone Age brains full of cognitive biases and tribal instincts, even capable of safely steering this ship?</p>
<p>This is the desperate gamble that will define the 21st century. The same technologies that pose these threats - AI, biotechnology, global connectivity - are also the very tools we might use to transcend the flaws that make us so dangerous to ourselves. This is the core argument for transhumanism. Can we use our accelerating power to upgrade our own wisdom? Can we augment ourselves to become less impulsive, more empathetic, and more capable of long-term thinking before our primitive instincts push the self-destruct button?</p>
<p>This choice - to risk extinction by remaining as we are, or to risk our definition of humanity by becoming something more - is the project for the next 50 years. The answer will determine everything that follows.</p>
<h4 id="heading-the-next-century-redefining-human"><strong>The Next Century: Redefining "Human"</strong></h4>
<p>The transhumanist path could lead to a point where the very definition of our species comes into question. What makes a human? We may achieve a form of indefinite lifespan as medical technology extends life faster than we age. Artificial General Intelligence (AGI) will likely exist, and the most successful humans may be those who enter a deep symbiosis with it. We will have permanent, self-sustaining bases on the Moon and Mars, the first true off-world citizens.</p>
<h4 id="heading-the-next-two-centuries-masters-of-matter-and-mind"><strong>The Next Two Centuries: Masters of Matter and Mind</strong></h4>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755986993566/67ed9f31-c113-459c-a28a-842e2c01f4da.png" alt="A beautiful and abstract digital illustration of a branching evolutionary tree from Homo Sapiens into Digital, Cybernetic, and Genetically Engineered forms." class="image--center mx-auto" /></p>
<p>The concept of digital consciousness, or "mind uploading," may offer a path away from biological fragility. This, combined with advanced genetic engineering, will lead to a post-human speciation event. Humanity could branch into different forms - some purely biological, some cybernetic, some entirely digital.</p>
<h4 id="heading-the-next-three-centuries-engineering-reality"><strong>The Next Three Centuries: Engineering Reality</strong></h4>
<p>With the energy of entire stars at our command via fusion power, we could become a multi-planetary species. Our descendants may learn to manipulate the fabric of spacetime for faster-than-light communication or travel. The very definition of "life" will be rewritten as we design entirely new synthetic organisms from scratch.</p>
<p>This is the trajectory we are on. From the first shared myths around a fire to potentially engineering new universes. The power our ancestors sought to master the natural world is now turning inward. Our Stone Age minds are wielding god-like powers. The upgrade to transhumanism may not be a luxury, but a necessity - a final, desperate attempt to install wisdom that can handle the awesome and terrifying power we've unleashed. We are standing on a knife's edge between becoming something more... or becoming nothing at all.</p>
<hr />
<h3 id="heading-a-fantastical-epilogue-what-if-were-leveling-up-or-down">A Fantastical Epilogue: What if We’re Leveling Up (or Down)?</h3>
<p>So, where does this runaway train of evolution ultimately lead? After pondering the immense stakes, it’s fun to let our minds wander into pure speculation.</p>
<p>All this talk of cybernetic upgrades, genetic rewrites, and post-human speciation reminds me, oddly enough, of the vibrant, chaotic world of a video game like <a target="_blank" href="https://www.leagueoflegends.com/en-us/">League of Legends</a>. It sounds crazy, but peel back the fantasy, and you’ll find a surprisingly fitting metaphor for the very future we're exploring.</p>
<p>Think about it. The gleaming city of Piltover is the ultimate vision of clean, elegant transhumanism, where people replace limbs with powerful Hextech prosthetics. Its underbelly, Zaun, is the darker side of bio-augmentation, full of gritty, desperate chem-tech modifications. Up on Mount Targon, mortals ascend to become god-like beings, a perfect parallel to a biological or spiritual form of evolution.</p>
<p>It’s a world where humanity has fractured along ideological lines of enhancement. Sound familiar?</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755987169383/4b6c240a-5fbe-46fc-a36b-f6500e9873e9.png" alt="A vibrant, creative digital art piece mashing up sci-fi and fantasy, with a cyberpunk character and a celestial warrior looking over a breathtaking futuristic/magical city." class="image--center mx-auto" /></p>
<p>Perhaps, as we become transhuman, our technology will advance to the point where we can truly bend the laws of physics around us. As the old saying goes, "any sufficiently advanced technology is indistinguishable from magic." Maybe one day we’ll be able to manipulate matter at will or draw energy from unseen dimensions - things our ancestors would undoubtedly call magic. In that truly fantastical world, the choice wouldn't just be whether to upgrade your body with tech, but whether to become a cybernetic assassin, a celestial demigod, or a master of the elements. It’s a playful thought, but it highlights the sheer, world-altering gravity of the path we're already on.</p>
<hr />
<p>Thanks for reading my random musings.</p>
]]></content:encoded></item><item><title><![CDATA[The Seven Most Iconic Film & TV Characters: A 2025 Update]]></title><description><![CDATA[This post was originally published on my old blog
Over a decade ago, back in 2014, I wrote a post outlining the seven most iconic characters in media. A lot has changed since then, but the core question remains: What makes a character truly iconic?
F...]]></description><link>https://ladmerc.com/the-seven-most-iconic-film-and-tv-characters-a-2025-update</link><guid isPermaLink="true">https://ladmerc.com/the-seven-most-iconic-film-and-tv-characters-a-2025-update</guid><category><![CDATA[Movies]]></category><category><![CDATA[marvel]]></category><category><![CDATA[DC Comics]]></category><category><![CDATA[film]]></category><dc:creator><![CDATA[Ladna Meke]]></dc:creator><pubDate>Wed, 12 Feb 2025 05:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756046506589/53733fd0-9ff1-4fdf-8264-b845af3342e4.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This post was originally published on my <a target="_blank" href="https://ladmerc.wordpress.com/2014/01/03/seven-most-iconic-film-characters/"><strong>old blog</strong></a></p>
<p>Over a decade ago, back in 2014, I wrote a <a target="_blank" href="https://ladmerc.wordpress.com/2014/01/03/seven-most-iconic-film-characters/">post</a> outlining the seven most iconic characters in media. A lot has changed since then, but the core question remains: What makes a character truly iconic?</p>
<p>For me, the criteria I set back then still hold true, with a little refinement. First: is this character universally recognized, even just from a silhouette? Second, and just as important: when you see the actor, is that character the first thing that comes to mind? Lastly: how many actors have been known to play this role?</p>
<p>The stronger the link between actor and role, the higher the rank. All of my choices <strong>must</strong> satisfy the first two conditions; their final placement is determined by the third.</p>
<p>Without further ado, here is <strong>my</strong> updated list for 2025.</p>
<p><strong>7. Batman (Various)</strong></p>
<p>The hugely intelligent Batman by night and billionaire Bruce Wayne by day. Fueled by the tragedy of his past, he uses fear and an arsenal of incredible gadgets to protect the people of Gotham. The most popular modern portrayal is Christian Bale's, which shot to prominence in <em>The Dark Knight</em> opposite Heath Ledger's Joker, who died six months before its release. The movie is widely regarded by critics as one of the best superhero films ever made. The fact that the mantle keeps getting passed to new actors proves the character itself is the true icon. By the time you finish reading this article, they’ve probably already cast a new one.</p>
<p><strong>6. James Bond (Various)</strong></p>
<p>The 1953-created James Bond (code name 007) should arguably rank higher on this list. However, because the character has been portrayed by so many actors across its frequent reboots, it is fair to place him here. The original and, for many, the definitive Bond is Sean Connery, who played the suave super-spy with a license to kill. He’s the epitome of cool, a gentleman spy who is as comfortable in a high-stakes poker game as he is in a high-speed car chase or charming beautiful women while executing MI6 missions. It's a role Connery almost didn't get, as creator Ian Fleming initially called him "an overgrown stuntman." Now, the "Who will be the next Bond?" debate is the most hotly contested job opening in the world.</p>
<p><strong>5. Jack Sparrow (Johnny Depp)</strong></p>
<p>Forget Captain America or Captain Hook! We all love Captain Jack Sparrow - the flamboyant, morally ambiguous, and frequently tipsy pirate captain of the Black Pearl. He's a master of talking his way out of (and into) trouble, often with a slurred monologue that leaves his enemies bewildered. From his unique looks to his annoying personality, down to his bad personal hygiene and constantly failing trickery, this Pirates of the Caribbean character is one of the most famous ever! You're never quite sure if he's a genius or just incredibly lucky. Depp famously based the character's eccentric swagger on a combination of The Rolling Stones' guitarist Keith Richards and the cartoon skunk Pepé Le Pew, a risky choice Disney executives were initially terrified of. That risk paid off, launching a franchise that sailed past $4.5 billion at the box office. Now, you’ll still find his iconic run all over your TikTok feed.</p>
<p><strong>4. Wolverine (Hugh Jackman)</strong></p>
<p>The tough-as-nails, cigar-chomping mutant with six unbreakable claws. He’s a lone wolf with a reluctant heart of gold, often finding himself as a gruff father figure to younger mutants. For over two decades, Hugh Jackman <em>is</em> Wolverine. He famously "retired" the character in <em>Logan</em>, only to come back again for another round. It's a role he famously won at the last minute after actor Dougray Scott had to drop out due to scheduling conflicts with <em>Mission: Impossible 2</em>. Jackman’s portrayal has since become the Guinness World Record holder for the "longest career as a live-action Marvel superhero." At this rate, Hugh Jackman is bound to play this role till he's 90.</p>
<p><strong>3. Tony Stark (Robert Downey Jr.)</strong></p>
<p>It is amazing what can happen in a decade. Tony Stark was nowhere near my Top 50 list a decade ago, but now he’s ranked third. The "genius, billionaire, playboy, philanthropist" who became Iron Man. He started as an arrogant arms dealer but evolved into the ultimate hero who made the final sacrifice. His journey from selfishness to selflessness became the very heart of the entire MCU saga. Robert Downey Jr. didn't just play Tony Stark; he embodied him, launching a $30 billion franchise with his irreplaceable charisma. His "I love you 3000" broke the internet, and fans are still scouring every new Marvel movie for a hint of his multiversal return. Downey’s portrayal was so captivating that Marvel has reportedly offered him $100m (Yes, you read that right - a hundred million US Dollars) to play Dr Doom.</p>
<p><strong>2. Harry Potter (Daniel Radcliffe)</strong></p>
<p>The boy who lived remains one of the most famous characters in the world. J.K. Rowling herself co-signed the casting of Daniel Radcliffe after a long search, saying she felt it was like "being reunited with her long-lost son". Daniel Radcliffe first portrayed Harry Potter at age 11, literally growing up on screen. We watched him go from a bewildered boy living under the stairs to a courageous young man facing his destiny. A somewhat timid character propelled by his own conscience, Harry was on a mission to defeat the most evil wizard known to mankind (and wizardkind). He's so definitive in the role that the biggest question for the new HBO series is, "But who could possibly replace him?"</p>
<p><strong>1. Mr. Bean (Rowan Atkinson)</strong></p>
<p>You didn’t think he would be here, did you? Rowan Atkinson’s character was literally a child in a grown man’s body, with a teddy bear for a best friend. Quite the opposite of what you’d expect from someone with an MSc in Electrical Engineering. Suffice it to say that he created the character himself! Almost always sporting his trademark tweed jacket and skinny red tie, this hilarious buffoon entertained audiences of almost all ages worldwide. He approaches everyday situations with the logic of an alien, finding bafflingly complex solutions to the simplest of problems. Rowan Atkinson's bumbling creation is a global phenomenon built on <strong>just 15 original episodes</strong>, which is hugely impressive considering the timeless legacy. The character also vastly influenced Atkinson's later work: in movies such as Love Actually and Johnny English Reborn, he still leaned on the same physical, visual comedic style. He's a walking, breathing GIF who has been a king of memes since the internet began. Before there were TikToks, there was Mr. Bean.</p>
]]></content:encoded></item><item><title><![CDATA[Stream Symphony: More Than Enough Kafka]]></title><description><![CDATA[Introduction: Navigating the Data Maze
Feel free to skip this section if you want to dive right into the technical bits.
Imagine you want to order your favourite burger from your neighbourhood spot “Akaso Burger” - you visit the website/mobile app, s...]]></description><link>https://ladmerc.com/stream-symphony-more-than-enough-kafka</link><guid isPermaLink="true">https://ladmerc.com/stream-symphony-more-than-enough-kafka</guid><category><![CDATA[kafka-partition]]></category><category><![CDATA[kafka]]></category><category><![CDATA[kafka topic]]></category><category><![CDATA[kafka broker]]></category><category><![CDATA[rabbitmq]]></category><dc:creator><![CDATA[Ladna Meke]]></dc:creator><pubDate>Wed, 03 Jan 2024 06:45:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/LqKhnDzSF-8/upload/d701a10a53c61003a0457d2ae4c49985.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction-navigating-the-data-maze"><strong>Introduction: Navigating the Data Maze</strong></h2>
<p>Feel free to skip this section if you want to dive right into the technical bits.</p>
<p>Imagine you want to order your favourite burger from your neighbourhood spot “Akaso Burger” - you visit the website or mobile app, select your item, and tap the order button. The button is disabled, displaying a spinner to indicate processing, and then re-enables. You get a success notification on the website or the app, quickly followed by an email confirmation 2 seconds later. Shortly after, an email arrives from your bank or PayPal, notifying you of the deducted amount due to your burger purchase.</p>
<p>Now let’s imagine it’s almost Christmas, and Akaso Burger realizes that besides you ordering daily, ten thousand other people in your neighbourhood seem to love their burgers too. The company decides to have a 24-hour window where all burgers are 70% off, but a user can only order 5 burgers within this time! You want to take advantage of this, but so do over 10,000 people around you. You repeat the process you’re familiar with - order -&gt; wait for the button to be disabled -&gt; get notification -&gt; get email confirmation - but this time things are a bit slower.</p>
<p>Most of the internet is powered by HTTP: a straightforward request-response protocol. This means that a client initiates a request to a server and waits some time for a response. In this case, the client is the website or the mobile app that triggers the order request when you click the button. The server is the backend handling the request - it does some operations (e.g. confirming there are enough resources) and then returns a response to the client. This is the standard we’re used to, but what happens when a website has to handle lots and lots of requests? The website can simply continue with the request-response paradigm we know and love, but this means the wait time for a response greatly increases as more people start ordering from Akaso Burger. When the server receives an order, it:</p>
<ol>
<li><p>Validates that the user hasn’t ordered more than 5 burgers in 24 hours</p>
</li>
<li><p>Checks if there are enough resources (ingredients, manpower, time) to handle the order</p>
</li>
<li><p>Charges the linked card or PayPal</p>
</li>
<li><p>Confirms the order on the app and at the same time:</p>
</li>
<li><p>Sends a confirmation email</p>
</li>
</ol>
<p>If we go with the HTTP request-response approach, we would need to do the steps above for each order, going through the flow linearly before arriving at the last step. As more users rush to Akaso Burger’s app, this starts getting noticeably slow because each request has to wait for its turn to be processed. As the number of requests increases, the system's capacity to handle them concurrently becomes a bottleneck.</p>
<p>What if we forgo our fully synchronous approach and rethink our architecture? For example, we can notify the user that the order has been received and is being processed. A few seconds later, we notify the user that the order has been confirmed. We can decide to do step 1 above instantly and give the user immediate feedback while we handle the processing in the background. Since we have now broken out of the typical request-response cycle we know and love, how do we handle this huge stream of data now lying somewhere in the background as close to real-time as possible? This is where tools such as RabbitMQ and Apache Kafka come in.</p>
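<p>To make this concrete, here is a minimal sketch of that split - validate instantly, hand everything else to a message broker, and respond right away. It assumes a Kafka broker at <code>localhost:9092</code> and the <code>kafka-python</code> client; the <code>orders</code> topic and all field names are purely illustrative, not Akaso Burger's actual API.</p>
<pre><code class="lang-python"># Minimal sketch: accept the order, publish an event, respond immediately.
# Assumes a Kafka broker at localhost:9092 and the kafka-python client
# (pip install kafka-python). Topic and field names are illustrative.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def place_order(user_id: str, burgers: int) -&gt; dict:
    # Step 1 stays synchronous: a cheap validation with instant feedback.
    if burgers &gt; 5:
        return {"status": "rejected", "reason": "max 5 burgers per 24h"}
    # Everything else (resources, payment, emails) happens later, handled
    # by whichever services consume the "orders" topic in the background.
    producer.send("orders", {"user_id": user_id, "burgers": burgers})
    return {"status": "received"}  # the user sees this immediately
</code></pre>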
<h2 id="heading-asynchronous-processing-the-contenders"><strong>Asynchronous Processing: The Contenders</strong></h2>
<p>When it comes to asynchronous processing, RabbitMQ and Kafka emerge as leading contenders. Both serve as robust message broker systems designed for near-real-time, scalable, and high-speed asynchronous processing, but they adopt different approaches. IBM defines a message broker as "a software that enables applications, systems, and services to communicate with each other and exchange information". Let's explore the distinctions before delving into the unique strengths of Kafka.</p>
<h3 id="heading-kafkas-approach"><strong>Kafka's Approach:</strong></h3>
<ul>
<li><p><strong>Durability:</strong> Messages in Kafka persist even after delivery/consumption.</p>
</li>
<li><p><strong>Message Replay:</strong> Kafka can replay messages, allowing consumers to re-read messages they have already consumed.</p>
</li>
<li><p><strong>Speed and Scalability:</strong> Built for high throughput, Kafka employs sequential disk I/O, avoiding the overhead of random access. By not deleting messages on consumption, Kafka also conserves compute cycles.</p>
</li>
<li><p><strong>Consumer Pull Model:</strong> Unlike pushing messages to consumers, Kafka consumers pull or poll messages from the broker, offering consumer flexibility and efficient resource utilization.</p>
</li>
</ul>
<h3 id="heading-rabbitmqs-features"><strong>RabbitMQ's Features:</strong></h3>
<ul>
<li><p><strong>Priority Queues:</strong> RabbitMQ supports priority queues, allowing some messages to be routed to higher-priority queues for expedited processing.</p>
</li>
<li><p><strong>Acknowledgment System:</strong> Consumers in RabbitMQ acknowledge message receipt, and this information is relayed to the producer, ensuring a reliable message delivery system (at the cost of higher latency).</p>
</li>
<li><p><strong>Ease of Learning:</strong> With a simpler architecture, RabbitMQ is considered more straightforward for users new to asynchronous processing.</p>
</li>
</ul>
<p>In the following sections, we'll dive deeper into Kafka's unique architecture.</p>
<h2 id="heading-components-of-kafka"><strong>Components of Kafka</strong></h2>
<p>In the previous section, we mentioned that the messages (data) are “delivered”. What exactly does this mean? Kafka has several components that enable it to send and receive messages. <strong>Producers</strong> send these messages to the <strong>broker</strong>, which in turn sends (delivers) the messages to one or more <strong>consumers</strong>. Since the broker does not inspect message contents, each consumer needs a way to identify which messages are relevant to it. This is aided by <strong>topics</strong> - the producer publishes the message to a specific topic on the broker, and the consumer(s) listen only for messages on that topic.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704209079059/b14796d1-a2ec-4bfa-883d-f1eb2dfead84.png" alt="Components of Kafka" class="image--center mx-auto" /></p>
<p>Let’s use a simple metaphor to explain this: Imagine you are a postman delivering packages in a building filled with engineers, mathematicians, and doctors. Each group has a numbered mailbox to receive packages relevant to them (e.g. 1 for Mathematicians, 2 for Engineers, etc). You label each package with the appropriate number and drop it off with the concierge. The concierge uses the number to identify what mailbox to place the package in. Every once in a while, a mathematician, engineer, or doctor walks to their designated mailbox to check if a new package has arrived. In this analogy, you are the producer, the concierge is the broker, the number is the topic, and the people checking the mailboxes are the consumers.</p>
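<p>Mapping the analogy back to code, a minimal round trip with the <code>kafka-python</code> client could look like the sketch below; the topic name and broker address are assumptions:</p>
<pre><code class="lang-python">from kafka import KafkaProducer, KafkaConsumer

# The "postman": drops a labelled package (message) off with the broker
producer = KafkaProducer(bootstrap_servers='localhost:9092')
producer.send('engineering', b'New package for the engineers')
producer.flush()

# The "engineer": checks the mailbox (topic) for new packages
consumer = KafkaConsumer(
    'engineering',
    bootstrap_servers='localhost:9092',
    auto_offset_reset='earliest',  # start from the oldest message if no offset yet
)
for message in consumer:
    print(message.topic, message.offset, message.value)
</code></pre>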
<h3 id="heading-broker"><strong>Broker</strong></h3>
<p>A central server with the Kafka program running, a broker is the cornerstone of the Kafka system. Because Kafka was designed with scalability and a high degree of fault tolerance and availability in mind, most setups do not use just one broker, but rather a combination of brokers working together called a cluster. These brokers can be deployed in different availability zones to minimize the risk of downtime. The broker is responsible for:</p>
<ul>
<li><p><strong>Message Persistence:</strong> Stores and manages messages produced by Kafka producers.</p>
</li>
<li><p><strong>Topic and Partition Management:</strong> Organizes messages into topics, divides topics into partitions, and manages the partition creation, replication, and reassignment. Each partition has one leader and multiple followers. The broker is responsible for leader election and ensuring that the leader is actively serving read and write requests.</p>
</li>
<li><p><strong>Producer Communication:</strong> Acts as the endpoint for Kafka producers to send messages. Producers connect to brokers and publish messages to specific topics. The broker is responsible for receiving and acknowledging these messages.</p>
</li>
<li><p><strong>Consumer Communication:</strong> Consumers connect to brokers to subscribe to topics and receive messages. The broker also maintains and updates the offset (position) of each consumer within a partition, tracking the last consumed message so that consumers can resume reading from where they left off after failures or restarts.</p>
</li>
<li><p><strong>Log Compaction:</strong> Supports log compaction for topics, retaining only the latest value for each key.</p>
</li>
<li><p><strong>Security and Access Control:</strong> Implements security features such as authentication and authorization to control access to topics.</p>
</li>
<li><p><strong>Monitoring and Metrics:</strong> Provides metrics for tracking the health, performance, and resource utilization of the Kafka cluster.</p>
</li>
<li><p><strong>Dynamic Configuration:</strong> Supports dynamic configuration changes, allowing administrators to modify configurations without requiring a restart.</p>
</li>
</ul>
<h3 id="heading-producer"><strong>Producer</strong></h3>
<p>Producers are client code interacting with the Kafka broker, responsible for sending messages to specified topics. While multiple producers can be created, reusing a single producer generally offers better performance.</p>
<p>When the producer starts up, it establishes a TCP connection with the broker(s) to get metadata such as topics, partitions, leaders, and clusters. It also opens another TCP connection for message sending when the producer <code>send()</code> function is called. Subsequent <code>send()</code> calls to the same topic reuse the same TCP connection. The default TCP connection between the producer and broker is unencrypted plaintext, eliminating the overhead of the brokers decrypting the messages. Because plaintext is not suitable if any of the servers are public, Kafka allows the producer (and consumer) to select a connection protocol during initialization. It is recommended to use a secure option like SSL (<em>might</em> not be needed when both servers are in a VPC). Periodically, according to the config <a target="_blank" href="http://metadata.max.age.ms"><code>metadata.max.age.ms</code></a>, a refresh of the metadata happens to proactively discover new brokers or partitions. For a deeper dive into producer internals, check <a target="_blank" href="https://blog.developer.adobe.com/exploring-kafka-producers-internals-37411b647d0f">this awesome post</a>.</p>
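<p>As a small illustration, here is how a <code>kafka-python</code> producer could be pointed at a TLS endpoint instead of the plaintext default; the broker address and certificate path are assumptions:</p>
<pre><code class="lang-python">from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers='broker.example.com:9093',
    security_protocol='SSL',       # the default is 'PLAINTEXT'
    ssl_cafile='/path/to/ca.pem',  # CA certificate used to verify the broker
)
</code></pre>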
<p>The exact timing of when a message is sent can vary depending on the configuration and the acknowledgment settings. For the most part, message sending happens asynchronously: when the producer <code>send()</code> function is called, it returns almost immediately, before the message has been fully committed by the broker. When batching is enabled (the default), the producer adds any message to its internal buffer and attempts to send it according to the batch size (default of 16KB). If the linger config is enabled, the producer will wait the linger milliseconds before sending the batch. This is aimed at increasing throughput, at the expense of increased latency. For example, a <a target="_blank" href="http://linger.ms"><code>linger.ms</code></a> of 5 means that a 5ms artificial delay is introduced before the producer sends the batch. This increases the chances of messages being sent in a batch since the producer would “linger” for 5ms to see if more messages arrive and can be added to the batch. Note, though, that if we already have a full batch worth of messages, this setting is ignored and the producer sends the batch immediately. In summary, producers send out the next batch of messages whenever <a target="_blank" href="http://linger.ms"><code>linger.ms</code></a> or <code>batch.size</code> is reached, whichever comes first.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704194249414/4778a1f8-d4de-4b63-af15-a1c265ce2206.png" alt="How/when producers send messages" class="image--center mx-auto" /></p>
<p>We can force the messages to be sent (irrespective of the linger and batch settings) by flushing the producer. This is a blocking operation and effectively makes the producer synchronous, because it has to wait for acknowledgement of delivery from the broker. As a result, flushing should be used sparingly (e.g. low-throughput environments or tests). Another point worth mentioning is that the buffer memory is finite: if messages accumulate beyond the <code>buffer.memory</code> limit, the producer will block for a configurable time <a target="_blank" href="http://max.block.ms"><code>max.block.ms</code></a>, after which it throws an exception. The buffer might fill up if the producer is receiving messages faster than it can send them, or if the broker is down for any reason.</p>
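<p>Here is a rough sketch of how these knobs map onto the <code>kafka-python</code> producer; the values shown are the defaults or simple examples, not tuning advice:</p>
<pre><code class="lang-python">from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers='localhost:9092',
    batch_size=16384,        # send a batch once it reaches 16KB...
    linger_ms=5,             # ...or after lingering 5ms, whichever comes first
    buffer_memory=33554432,  # total buffer; send() blocks once this fills up
    max_block_ms=60000,      # how long send() may block before raising an error
)

for i in range(1000):
    producer.send('orders', f'order-{i}'.encode())  # returns a future immediately

# Blocking: waits until every buffered message has been acknowledged.
# Use sparingly, e.g. in tests or low-throughput code paths.
producer.flush()
</code></pre>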
<h3 id="heading-topic"><strong>Topic</strong></h3>
<p>A Kafka topic serves as a way to organize messages, akin to organizing files in folders. Producers choose a topic to write to, while consumers select the topic they wish to read from. Topics are created and managed using CLI commands specific to the operating system: <code>kafka-topics.bat</code> for Windows and <a target="_blank" href="http://kafka-topics.sh"><code>kafka-topics.sh</code></a> for Mac and Linux (usually in a startup script).</p>
<p>Topics, including those created dynamically, are stored and replicated on Kafka brokers. Kafka automatically creates certain topics, such as the <code>__consumer_offsets</code> topic, as part of its operation. The ability to dynamically create topics is governed by the <code>auto.create.topics.enable</code> setting on the broker. When enabled, the broker creates a topic and partitions when:</p>
<ul>
<li><p>A producer writes to a topic that does not yet exist.</p>
</li>
<li><p>A producer fetches metadata for a topic that does not exist.</p>
</li>
<li><p>A consumer reads from a topic that does not exist.</p>
</li>
</ul>
<p>However, creating topics on the fly can lead to maintenance challenges as a typo can cause unwanted topics to be created. Topics created this way also share the same replication factor, number of partitions, and retention settings.</p>
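<p>Because of these pitfalls, topics are often created explicitly up front. A minimal sketch with <code>kafka-python</code>'s admin client, where the topic name, partition count, and replication factor are example values:</p>
<pre><code class="lang-python">from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers='localhost:9092')

# Explicit creation avoids inheriting one-size-fits-all auto-creation defaults
admin.create_topics(new_topics=[
    NewTopic(name='orders', num_partitions=3, replication_factor=2)
])
</code></pre>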
<p>Kafka uses topics to parallelize processing and scale horizontally by splitting messages into different partitions in different brokers. We will discuss partitions in more detail later on.</p>
<h3 id="heading-consumer"><strong>Consumer</strong></h3>
<p>A Kafka consumer is a client application that reads data from a specific topic, more precisely from a particular partition within that topic. Similar to producers, consumers establish a TCP connection with the broker when started. Unlike traditional systems where messages are pushed to consumers, Kafka's design encourages consumers to poll the broker for new messages at their own pace. The reasons for this pull-based approach are detailed in <a target="_blank" href="https://docs.confluent.io/kafka/design/consumer-design.html#consumer-design">Kafka's consumer design documentation</a>.</p>
<p>Consumers continuously poll the broker for new messages, receiving up to <code>max.poll.records</code> messages per poll alongside the offsets for those messages. Kafka uses offsets to keep track of the position of the last message the consumer has read. The default behaviour of the consumer is to auto-commit offsets (<code>enable.auto.commit</code>), which means that every 5 seconds (or <a target="_blank" href="http://auto.commit.interval.ms"><code>auto.commit.interval.ms</code></a>), the consumer updates the broker with the current offset. This is handled by the client libraries making a request to the broker to update the internal <code>__consumer_offsets</code> topic. Consider disabling auto-commit and using manual commits in production to avoid potential issues with offsets. This is well captured in the “Auto Commit” section of <a target="_blank" href="https://medium.com/@rramiz.rraza/kafka-programming-different-ways-to-commit-offsets-7bcd179b225a">this</a> article, pasted here for visibility:</p>
<blockquote>
<p>With auto commit enabled, kafka consumer client will always commit the last offset returned by the poll method even if they were not processed. For example, if poll returned messages with offsets 0 to 1000, and the consumer could process only up to 500 of them and crashed after auto commit interval. Next time when it resumes, it will see last commit offset as 1000, and will start from 1001. This way it ended up losing message offsets from 501 till 1000. Hence with auto commit, it is critical to make sure we process all offsets returned by the last poll method before calling it again. Sometimes auto commit could also lead to duplicate processing of messages in case consumer crashes before the next auto commit interval.</p>
</blockquote>
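<p>A minimal sketch of the manual-commit pattern the quote recommends, again with <code>kafka-python</code>; the <code>process()</code> function is a hypothetical stand-in for your business logic:</p>
<pre><code class="lang-python">from kafka import KafkaConsumer

consumer = KafkaConsumer(
    'orders',
    group_id='order-processors',
    bootstrap_servers='localhost:9092',
    enable_auto_commit=False,  # we decide when an offset is safe to commit
)

for message in consumer:
    process(message.value)  # hypothetical business logic
    # Commit only after processing succeeds: a crash before this line means
    # re-delivery (possible duplicates) rather than silent message loss.
    consumer.commit()
</code></pre>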
<p>Consumers periodically send heartbeats to the broker to signal their activity and ensure the broker is aware of their current status. This occurs during message polling or when committing offsets, whether automatically or manually.</p>
<p>Imagine a situation where a topic is receiving 20,000+ messages per second. If we have a single consumer reading from that topic, message consumption would be considerably slow. Kafka was built for rapid real-time message handling, so how is this achieved? With the help of partitions and consumer groups, which will be discussed in the next sections.</p>
<h2 id="heading-beyond-the-basics-digging-deeper"><strong>Beyond the Basics: Digging Deeper</strong></h2>
<h3 id="heading-partitions-and-replication"><strong>Partitions and Replication</strong></h3>
<p>In previous sections, we established that producers write data to a topic by selecting a partition. It is important to think about Kafka’s mental model at the partition level, not at the topic level. What exactly are partitions, and why are they important to Kafka?</p>
<p>Let’s imagine you are participating in a burger-eating competition where you have to eat 1000 burgers. It would take you days to consume all 1000 burgers, but what if you could have your brother help eat some of them? In this case, you could place 500 burgers in Box A and give your brother 500 burgers in Box B. Now you both can finish the combined 1000 burgers in a shorter time. What if you include your sister and nephew as well? Now you can have 4 boxes, each containing 250 burgers, that would be finished much faster because all 4 of you are eating the burgers at the same time, in parallel. <strong>Parallelism</strong> helps you rapidly speed up your consumption. What you have essentially done is partition your burgers into 4 boxes so you can have 4 consumers finish these burgers much quicker. Let’s take the analogy up a notch - imagine it is crucial for these 1000 burgers to be consumed, and all 4 of you are in the same room eating the burgers. If an emergency happens in the room, e.g. a fire alarm goes off, all four people have to stop eating the burgers. What if the first two people take 250 burgers each into one building, and the other two people take 250 burgers each into the adjacent building? Now if any incident happens in one building, at least two people will still be eating burgers, and the other two can join them later. What you have done now is called <strong>redundancy</strong> (or fault tolerance) - a fault in one building does not stop operations because the other building is available to continue.</p>
<p>This is exactly how Kafka handles partitions! <mark>Partitions act as logical divisions that aid in load balancing, parallel processing, and fault tolerance</mark>. When a topic is created, the number of partitions can be defined, or it falls back to <code>num.partitions</code> on the broker config, which defaults to 1. When producers write to a topic, they can select a partition key to write with. If the partition key is not set when writing to the topic, Kafka defaults to a round-robin partition strategy - that is, for N partitions, it cycles from partition 0 to N-1 and loops back to 0. With the partition key set, producers can consistently route messages to the same partition (ideal for maintaining order or grouping related messages). Partitions are distributed across brokers - just as in our analogy above, the boxes can be in different buildings instead of one. This means if a broker hosting one partition fails, other brokers can still serve their partitions. Without partitions, Kafka can still operate with multiple brokers, but partitions enable the distribution and parallel processing of data.</p>
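<p>For instance, keying every message by user ID routes all of a given user's orders to the same partition, preserving their relative order; the topic, keys, and values below are assumptions:</p>
<pre><code class="lang-python">from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='localhost:9092')

# Same key, same partition: events for one user stay in order
producer.send('orders', key=b'user-42', value=b'2x cheeseburger')
producer.send('orders', key=b'user-42', value=b'cancel order')

# No key: the producer spreads messages across partitions for balance
producer.send('orders', value=b'anonymous order')
</code></pre>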
<h4 id="heading-key-concepts">Key Concepts:</h4>
<ul>
<li><p>Load Balancing: Partitions enable load balancing, distributing data across multiple consumers.</p>
</li>
<li><p>Parallel Processing: Parallelism is achieved by allowing multiple consumers to process partitions simultaneously.</p>
</li>
<li><p>Fault Tolerance: Redundancy ensures continued operation even if a broker hosting a partition fails.</p>
</li>
</ul>
<p>To understand replication, let’s tweak our analogy. Instead of a burger-eating competition, you are now in a book-reading competition. You need to read 1000 books and know which books have been read. As before, we have spread the books into 4 different boxes in two different buildings. Let’s say Building 1 has Boxes A and B, and Building 2 has Boxes C and D. If Building 1 has a fault, all the books in Boxes A and B cannot be read, as we can only read books in Boxes C and D in Building 2. While Building 1 is down, if the two people who were initially reading the books in Boxes A and B decide to go to Building 2, they do not have access to the books they were reading or had read. They can only start reading books in Boxes C and D. This means the system is not fully <strong>fault-tolerant.</strong> To resolve this, we can make our system more robust by taking a copy of Boxes A and B and duplicating them in Building 2. Let’s call these new boxes Ax and Bx. With this change, if Building 1 is down, the two people can go to Building 2 and continue reading from Boxes Ax and Bx. This is called replication.</p>
<p>Replication essentially takes a copy of the partitions and copies them over to other brokers for improved availability. When a topic is created, the replication factor can be defined, which is a number indicating how many copies of each partition we want on the brokers.  The default replication factor is 1 meaning there is no replication, so only one copy of each partition for that topic will ever be created. The replication factor cannot be greater than the number of brokers because there needs to be a corresponding broker for each replica. One of these copies is assigned as the <strong>leader</strong>, and the rest are followers. When the producer writes to a partition, Kafka writes to the broker designated as the leader of that partition and then propagates this data to the followers. In the same manner, when consumers read from the partition, they only read from the leader node. If the leader broker fails for any reason, Kafka promotes one of the followers to be a new leader.</p>
<p>Acks (or acknowledgments) are a way for the broker to inform the producer that it has received a message. Depending on the producer acks configuration, replication can considerably increase latency. For example, if the acks setting is set to 'all', the producer needs to wait for all the in-sync follower brokers to acknowledge receiving the message before marking the request as complete. That said, irrespective of the acks setting, consumers cannot see a message until it has been fully propagated to all the in-sync follower nodes.</p>
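<p>In <code>kafka-python</code>, this trade-off is a single producer setting; the comments summarize the standard options:</p>
<pre><code class="lang-python">from kafka import KafkaProducer

# acks=0: fire and forget - lowest latency, messages may be lost
# acks=1: wait for the partition leader only (the default)
# acks='all': wait for all in-sync replicas - safest, highest latency
producer = KafkaProducer(bootstrap_servers='localhost:9092', acks='all')
</code></pre>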
<p>In the image below, we have 3 brokers in the cluster. Topic A has 2 partitions, so each partition lives in a different broker, e.g. Partitions 0 and 1 for Topic A are placed in Brokers 1 and 2 respectively.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704203080257/b7c3cf12-3260-4c22-9759-bcb394506f7a.png" alt="Kafka Partitions" class="image--center mx-auto" /></p>
<p>In this next image, we introduce replication. With two partitions and a replication factor of 2, both partitions for Topic A are now copied into both Brokers 1 and 2. Because we have fewer partitions than brokers, some brokers would not have partitions for this topic.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704203249408/fabfe469-35fd-4fe2-a691-c19d6ec41636.png" alt="Kafka 2 partitions with two replicas" class="image--center mx-auto" /></p>
<p>What if we have 3 partitions and we also want a replication factor of 2? In the image below, observe that all three partitions are copied to the brokers. Partition 0 is replicated on Broker 1 and Broker 3, and the other partitions are similarly replicated. For brevity, other topics are not shown.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704203502168/b768332c-2931-4540-a2e5-87c02ad620c1.png" alt="Kafka 3 partitions with two replicas" class="image--center mx-auto" /></p>
<p>It is also possible to have a lot more partitions than brokers, as shown in the next image. In this case, we have 4 partitions and a replication factor of 3 (it is not possible to have more replicas than brokers). Notice how a copy of each partition is placed in every broker.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704203892901/f6b2da47-770a-419f-8125-8a30a3d2dfdd.png" alt class="image--center mx-auto" /></p>
<p>Armed with this knowledge, it is crucial to visualize messages at the partition level - meaning producing, consuming and offsets are all for a partition and not the entire topic.  This means changing the number of partitions or replicas on an existing topic can become tricky since existing consumers would be depending on the partitions. The best way to resolve this is to use a streaming transformation to automatically stream all the messages from the original topic into a new Kafka topic which has the desired number of partitions or replicas as explained <a target="_blank" href="https://developer.confluent.io/tutorials/change-topic-partitions-replicas/ksql.html">here</a>.</p>
<p>With the introduction of partitions, how would the consumers subscribe to a particular partition? Would the consumers need to keep track of the partition number before reading? Kafka solves this with the concept of a consumer group.</p>
<h3 id="heading-consumer-group"><strong>Consumer Group</strong></h3>
<p>The previous sections described a simplistic approach to message consumption at the topic level without considering partitions. Now, with our data partitioned, we can fully leverage parallelization by deploying multiple consumers that read from specific partitions. These consumers are organized into a Kafka abstraction known as a Consumer Group. To group consumers, specify a group ID when creating the consumer, as shown below:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> kafka <span class="hljs-keyword">import</span> KafkaConsumer

consumer = KafkaConsumer(
    <span class="hljs-string">'topic1'</span>,
    group_id=<span class="hljs-string">'my-group-id'</span>,
    bootstrap_servers=<span class="hljs-string">'your_kafka_bootstrap_servers'</span>
    ...
)
</code></pre>
<p>Kafka ensures that only a single consumer within a consumer group can read from a given partition, preventing duplicate processing of that partition's data. Parallelism still happens safely: multiple consumers within the same group work on messages from different partitions simultaneously, without ever racing each other over the same partition.</p>
<p>If the number of consumers exceeds the number of partitions, surplus consumers remain idle until one of the active consumers dies. Conversely, suppose we have a topic with 8 partitions and initially start 3 consumers in the same consumer group. During a rebalance, Kafka assigns all 8 partitions across the 3 consumers, so some consumers handle 3 partitions while others handle 2. If a fourth consumer is later added, Kafka will rebalance and assign each consumer exactly 2 partitions, achieving a balanced assignment. This opens up some interesting ideas:</p>
<blockquote>
<p>Placing all consumers in a single group transforms Kafka into a queue, as a partition can only be read by one consumer in the group. On the other hand, placing each consumer in its own group makes Kafka act as a Pub/Sub system, allowing each group to access the same message.</p>
</blockquote>
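<p>A quick sketch of that distinction with <code>kafka-python</code>; the topic and group names are arbitrary:</p>
<pre><code class="lang-python">from kafka import KafkaConsumer

# Queue-like: both consumers share one group, so each message is handled by
# exactly one of them (the topic's partitions are split between them)
worker_a = KafkaConsumer('orders', group_id='workers',
                         bootstrap_servers='localhost:9092')
worker_b = KafkaConsumer('orders', group_id='workers',
                         bootstrap_servers='localhost:9092')

# Pub/Sub-like: each consumer sits in its own group, so every message is
# delivered to both, e.g. one service bills the order while another emails
billing = KafkaConsumer('orders', group_id='billing',
                        bootstrap_servers='localhost:9092')
emailer = KafkaConsumer('orders', group_id='emails',
                        bootstrap_servers='localhost:9092')
</code></pre>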
<p>Kafka designates one of the brokers as the group coordinator, responsible for decisions on partition assignment when new consumers join and reassignment when a consumer leaves. The coordinator, informed by periodic heartbeat requests from consumers, manages the dynamic state of the consumer group.</p>
<h2 id="heading-references">References</h2>
<ul>
<li><p><a target="_blank" href="https://kafka.apache.org/documentation/">https://kafka.apache.org/documentation/</a></p>
</li>
<li><p><a target="_blank" href="https://docs.confluent.io/kafka/">https://docs.confluent.io/kafka/</a></p>
</li>
<li><p><a target="_blank" href="https://www.oreilly.com/library/view/kafka-the-definitive/9781491936153/ch04.html">https://www.oreilly.com/library/view/kafka-the-definitive/9781491936153/ch04.html</a></p>
</li>
<li><p><a target="_blank" href="https://stackoverflow.com/questions/60550839/how-to-dynamically-add-consumers-in-consumer-group-kafka">https://stackoverflow.com/questions/60550839/how-to-dynamically-add-consumers-in-consumer-group-kafka</a></p>
</li>
<li><p><a target="_blank" href="https://medium.com/@rramiz.rraza/kafka-programming-different-ways-to-commit-offsets-7bcd179b225a">https://medium.com/@rramiz.rraza/kafka-programming-different-ways-to-commit-offsets-7bcd179b225a</a></p>
</li>
<li><p><a target="_blank" href="https://blog.developer.adobe.com/exploring-kafka-producers-internals-37411b647d0f">https://blog.developer.adobe.com/exploring-kafka-producers-internals-37411b647d0f</a></p>
</li>
<li><p><a target="_blank" href="https://www.techpoolx.com/blog/the-side-effect-of-fetching-kafka-topic-metadata.html">https://www.techpoolx.com/blog/the-side-effect-of-fetching-kafka-topic-metadata.html</a></p>
</li>
<li><p><a target="_blank" href="https://www.conduktor.io/kafka/kafka-topic-replication/#Kafka-Topic-Replication-Factor-0">https://www.conduktor.io/kafka/kafka-topic-replication/#Kafka-Topic-Replication-Factor-0</a></p>
</li>
<li><p><a target="_blank" href="https://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0-9-consumer-client">https://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0-9-consumer-client</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Time Waits for No One - Until You Freeze It]]></title><description><![CDATA[In JavaScript, setTimeout feels like an unbreakable contract with the future. But what if I told you that you could pause that future? That you can stop the relentless march of the event loop, giving yourself all the time in the world?
This isn't a t...]]></description><link>https://ladmerc.com/time-waits-for-no-one-until-you-freeze-it</link><guid isPermaLink="true">https://ladmerc.com/time-waits-for-no-one-until-you-freeze-it</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[Event Loop]]></category><category><![CDATA[SetTimeout]]></category><category><![CDATA[setInterval]]></category><category><![CDATA[#microtaskqueue]]></category><dc:creator><![CDATA[Ladna Meke]]></dc:creator><pubDate>Sat, 14 May 2022 16:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756040983205/c1c6aaaa-1a29-4ec5-b582-d78c9e2c3c27.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In JavaScript, <code>setTimeout</code> feels like an unbreakable contract with the future. But what if I told you that you could pause that future? That you can stop the relentless march of the event loop, giving yourself all the time in the world?</p>
<p>This isn't a theoretical exercise; it's a fundamental consequence of how browsers handle JavaScript execution. But before we learn how to stop time completely, it's important to understand that time in JavaScript is already a surprisingly flexible concept. The timers we use are governed by a fascinating set of rules that bend, stretch, and warp our expectations.</p>
<hr />
<h2 id="heading-the-rules-of-bending-time">The Rules of Bending Time</h2>
<p>The delay you provide to a timer is more of a suggestion than a command. Several factors determine its actual execution time, revealing that our control over the timeline is limited.</p>
<h3 id="heading-the-illusion-of-precision-tasks-vs-microtasks">The Illusion of Precision: Tasks vs. Microtasks</h3>
<p>The event loop manages at least two different queues: the Task Queue for things like <code>setTimeout</code> callbacks, and the Microtask Queue for promise callbacks (<code>.then()</code>). After any script runs, the event loop will completely empty the Microtask Queue before processing a single item from the Task Queue.</p>
<p>This gives promises a higher execution priority and explains why this code:</p>
<pre><code class="lang-javascript"><span class="hljs-built_in">setTimeout</span>(<span class="hljs-function">() =&gt;</span> <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Timeout (Task)'</span>), <span class="hljs-number">0</span>);
<span class="hljs-built_in">Promise</span>.resolve().then(<span class="hljs-function">() =&gt;</span> <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Promise (Microtask)'</span>));
<span class="hljs-comment">// Output:</span>
<span class="hljs-comment">// Promise (Microtask)</span>
<span class="hljs-comment">// Timeout (Task)</span>
</code></pre>
<p>...logs the promise first, even though both are scheduled for "as soon as possible."</p>
<h3 id="heading-browser-imposed-slowdowns-clamping-and-throttling">Browser-Imposed Slowdowns: Clamping and Throttling</h3>
<p>Browsers actively manipulate your timers to improve performance and save battery life.</p>
<ul>
<li><p><strong>The 4ms Clamp:</strong> As defined by the HTML spec, after five levels of nested <code>setTimeout</code> calls, the browser will clamp the timeout to a minimum of 4 milliseconds, preventing high-frequency recursive loops from hogging the CPU.</p>
</li>
<li><p><strong>Inactive Tab Throttling:</strong> To conserve resources, browsers will aggressively throttle timers in background tabs, often limiting them to running no more than once per second. Your application's time literally slows down when it's not in focus.</p>
</li>
</ul>
<h3 id="heading-the-248-day-limit-integer-overflow">The 24.8-Day Limit: Integer Overflow</h3>
<p>The maximum delay for <code>setTimeout</code> is governed by a 32-bit signed integer, which has a max value of <code>2,147,483,647</code> milliseconds. If you set a timer for longer than roughly 24.8 days, the value overflows into a negative number, and the timer fires almost instantly. Most engines treat a negative delay as <code>0</code>.</p>
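<p>You can see this for yourself in a browser console; the threshold is exactly <code>2 ** 31 - 1</code> milliseconds:</p>
<pre><code class="lang-javascript">// 2,147,483,647 ms (~24.8 days) is the largest delay that behaves as written
setTimeout(() =&gt; console.log('fires in ~24.8 days'), 2 ** 31 - 1);

// One millisecond more overflows to a negative value, which engines
// treat as 0, so this callback fires almost immediately
setTimeout(() =&gt; console.log('fires right away'), 2 ** 31);
</code></pre>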
<hr />
<h2 id="heading-how-to-freeze-the-event-loop">How to Freeze the Event Loop ⏸️</h2>
<p>So, we've seen that browsers can bend and warp time. But what about stopping it completely? This is where we move from bending the rules to breaking them.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1756040562509/53d390b7-cb52-4794-a3ea-d9e4d3b083f7.png" alt class="image--center mx-auto" /></p>
<p>Imagine a time-boxed online quiz. Here are two ways a user can grant themselves infinite time.</p>
<h3 id="heading-method-1-the-modal-prompt-alert-confirm">Method 1: The Modal Prompt (<code>alert</code>, <code>confirm</code>)</h3>
<p>These aren't ordinary functions. They are instructions to the browser's UI to create a modal window. A modal demands user attention, and to enforce this, the browser freezes the rendering and scripting thread of that tab. While the alert is on-screen, the event loop is completely blocked, unable to process the <code>submitQuiz()</code> callback waiting in the queue.</p>
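<p>You can demonstrate the freeze in a few lines: schedule a timer, block the thread with an alert, and wait before dismissing it:</p>
<pre><code class="lang-javascript">const start = Date.now();
setTimeout(() =&gt; console.log(`fired after ${Date.now() - start}ms`), 1000);

// While this modal is open, the event loop cannot run the callback above.
// Wait ten seconds before clicking OK and the log shows ~10000ms, not 1000ms.
alert('Time is frozen');
</code></pre>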
<h3 id="heading-method-2-the-browser-debugger">Method 2: The Browser Debugger</h3>
<p>An even more powerful tool is the debugger. By opening the developer tools and placing a breakpoint or clicking the "pause script execution" button, a user pauses the JavaScript engine itself. The entire execution context is frozen in place, achieving the same time-stopping effect as <code>alert()</code>.</p>
<p>The core reason this works is that JavaScript's event loop is non-preemptive. It can only run a new task when the previous one has completed. A blocking alert or a debugger pause never allows the current task to "complete," thus starving the event loop.</p>
<hr />
<h2 id="heading-the-machinery-why-freezing-works">The Machinery: Why "Freezing" Works</h2>
<p>So, is the event loop <em>literally</em> frozen when you call <code>alert()</code>? It’s a great question with a nuanced answer. While the observable effect is a complete halt, the component that's truly locked up is the main execution thread.</p>
<p>Here’s the step-by-step of what happens:</p>
<ol>
<li><p><strong>The Main Thread is Blocked:</strong> When a synchronous function like <code>alert()</code> is called, it's pushed onto the call stack. It will not finish its execution (and be popped off the stack) until the user provides input (e.g., clicks "OK"). The main thread is now completely occupied, stuck inside this single task.</p>
</li>
<li><p><strong>The Event Loop is Starved:</strong> The event loop has one primary job: check if the call stack is empty. If it is, move the next task from the task queue onto the stack to be executed. Since the call stack is perpetually occupied by <code>alert()</code>, the event loop's condition is never met. It's effectively stuck, unable to perform its function.</p>
</li>
</ol>
<p>Think of the main thread as a worker on an assembly line and the event loop as the manager who places new items on the belt. If the worker gets stuck on a single, difficult item (<code>alert()</code>), they can't move on. The manager (<code>event loop</code>) sees the worker is busy and simply waits, unable to place any new items on the belt.</p>
<p>So, while the main thread is the prisoner, the entire asynchronous system is brought to a standstill. For this reason, "freezing the event loop" is a functionally accurate and effective way to describe the outcome.</p>
<hr />
<h2 id="heading-a-final-piece-of-trivia">A Final Piece of Trivia</h2>
<p>Did you know you can use <code>clearTimeout()</code> to cancel a <code>setInterval()</code> and vice-versa?</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> myInterval = <span class="hljs-built_in">setInterval</span>(<span class="hljs-function">() =&gt;</span> <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'tick'</span>), <span class="hljs-number">1000</span>);

<span class="hljs-comment">// This works perfectly!</span>
<span class="hljs-built_in">clearTimeout</span>(myInterval);
</code></pre>
<p>According to the HTML spec, <code>setTimeout</code> and <code>setInterval</code> share the same pool of timer IDs, making their clearing functions interchangeable.</p>
<hr />
<h2 id="heading-conclusion-you-cant-freeze-a-server">Conclusion: You Can't Freeze a Server</h2>
<p>The ability to pause, delay, and manipulate the client-side clock demonstrates a core security principle: <strong>never trust the client</strong>. While you can freeze your browser's perception of time, you can't stop the clock on the server. For anything that requires integrity (quizzes, auctions, session expirations), the server must remain the single, un-freezable source of truth.</p>
]]></content:encoded></item><item><title><![CDATA[El Cashico - When Attack Meets Defense and Money Speaks]]></title><description><![CDATA[This post was originally published on my old blog.

Arsene Wenger and his army have topped the Premier League for a considerable number of months, but the bookies still fancy Manchester City or Chelsea to grab the title. This shows the importance att...]]></description><link>https://ladmerc.com/el-cashico-when-attack-meets-defense-and-money-speaks</link><guid isPermaLink="true">https://ladmerc.com/el-cashico-when-attack-meets-defense-and-money-speaks</guid><category><![CDATA[Mourinho]]></category><category><![CDATA[Chelsea]]></category><category><![CDATA[Manchester City]]></category><category><![CDATA[Premier League]]></category><category><![CDATA[sports]]></category><category><![CDATA[football]]></category><dc:creator><![CDATA[Ladna Meke]]></dc:creator><pubDate>Mon, 03 Feb 2014 17:00:00 GMT</pubDate><content:encoded><![CDATA[<p><em>This post was originally published on my</em> <a target="_blank" href="https://ladmerc.wordpress.com/2014/02/03/el-cashico-when-attack-meets-defense-and-money-speaks/"><em>old blog.</em></a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755997912543/03cf6ccd-54e7-4ce4-8f0c-9ec86d9827bc.jpeg" alt class="image--center mx-auto" /></p>
<p>Arsene Wenger and his army have topped the Premier League for a considerable number of months, but the bookies still fancy Manchester City or Chelsea to grab the title. This shows the importance attached to today's fixture: the league's meanest attack coming up against the tidiest defense, in a tale where money (in the form of world-class players) speaks.</p>
<p>With Arsenal overthrowing City at the summit of the league after their victory against Crystal Palace on Sunday, the onus is on either of the cash-loaded clubs to work their way up. The home team's recent away hammering of Tottenham has done well to remind Mourinho's youngsters just how ruthless they can be. In this fixture, now generally dubbed El Cashico, a win, loss or draw for either club might just shift the advantage to any of the current top three teams. With the clubs boasting huge talents - Manchester City's wage bill for 2012/2013 was £202 million while Chelsea's was £173 million - expect this game to be a cracker.</p>
<p><strong>Manchester City</strong></p>
<p>Pellegrini's side has been in imperious form at home this season. They're enjoying the scintillating home form that has previously been the trademark of their red neighbour. The Etihad is now a fortress and does not bode well for visiting teams - they boast a 100 percent home record this season!</p>
<p>Perhaps the scariest part of this record is that they have scored a whopping 42 goals at home and conceded just 8. In total this term, they have scored 66 goals and are well on course to break the goals record set by today's opponent, Chelsea. Their lethal attack has placed three of their players in the top 10 highest goalscorers - the most of any team. In January alone, they scored 35 goals against the teams they faced. The team has created 325 chances and averaged 86 percent pass completion, meaning Mourinho's midfield will have to work harder to get something from this game. With the likes of Fernandinho and the impeccable Yaya Toure sure to start today, it becomes all the more an uphill task for Chelsea.</p>
<p>Manchester City are not afraid to bang the goals in past big teams, as Arsenal, Manchester United and Tottenham can testify. A series of slow, shaky away performances has been replaced with superb recent displays, and if City can keep this up, the sky is the limit. The big stat in City's favour is that they have won their last four home league fixtures against Chelsea.</p>
<p><strong>Chelsea</strong></p>
<p>If there is one team that can halt Manchester City's domineering run, it is Chelsea. Statistically, Chelsea boasts the best defense in the league, having conceded just 20 goals all season (although they're yet to face Manchester City at the Etihad - as Arsenal already has). The central defensive pairing of Gary Cahill and the experienced John Terry has been resolute. Captain John Terry has rolled back the years and his sidekicks at full back are sizzling as well - enter Ivanovic and Azpilicueta.</p>
<p>They have ensured Chelsea has kept three consecutive away clean sheets and this is a big bonus going into this game. Also, they have conceded just three goals since December 7 and have racked up nine clean sheets this season. Mourinho is well known for his defensive rigidity and that is exactly what he is building.</p>
<p>Despite today's game being an 'Attack vs Defense' game, the bulk of the game will stem from the midfield. The London team has a 52 percent average duel success rate, which shows they can break up opposition attacks well; this is where Nemanja Matic and Ramires come into play. Not to be outdone by their opponents, Chelsea boast an 83 percent pass success rate, slightly less than City's. In addition, Eden Hazard is a 'foul-magnet': fouled 66 times, he is the most fouled player in the league, and with City's relatively higher foul rate, Chelsea can rely on free kicks to help their cause. This is one of the dimensions of Chelsea's new style of play - getting the ball more to Hazard to dribble or draw a foul. With just one defeat in the previous 13 league outings and with the 39 shots they accrued against West Ham, Chelsea are looking a strong team, even without a proper striker.</p>
<p>This is going to be one tough game to predict, and even the bookies are not sure of the right odds to set. With money, both teams boast world-class players and coaches. On one side, City look destined to beat Chelsea. On the other, Mourinho's stubbornness and shrewdness, coupled with his team's form, tilt the balance in his favour. Perhaps this might be the time for Pellegrini to finally beat Mourinho for only the second time, or for Mourinho to extend his winning run against the Chilean.</p>
]]></content:encoded></item><item><title><![CDATA[Dissecting the Juan Mata Conundrum]]></title><description><![CDATA[This post was originally published on my old blog

There is one thing that is obvious, which is I can't change the rules and I can't start the match with 12, 13 or 14. I'd love that because many people deserve to play and Juan is one who deserves to ...]]></description><link>https://ladmerc.com/dissecting-the-juan-mata-conundrum</link><guid isPermaLink="true">https://ladmerc.com/dissecting-the-juan-mata-conundrum</guid><category><![CDATA[Chelsea]]></category><category><![CDATA[football]]></category><category><![CDATA[Manchester United]]></category><category><![CDATA[Premier League]]></category><dc:creator><![CDATA[Ladna Meke]]></dc:creator><pubDate>Sun, 19 Jan 2014 17:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756037370482/647b0eae-598d-4a88-b0a9-4725c0764500.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This post was originally published on my <a target="_blank" href="https://ladmerc.wordpress.com/2014/01/19/dissecting-the-juan-mata-conundrum/">old blog</a></p>
<blockquote>
<p>There is one thing that is obvious, which is I can't change the rules and I can't start the match with 12, 13 or 14. I'd love that because many people deserve to play and Juan is one who deserves to play because of the way he works, behaves, the way we see him every day. But if I play him and I don't play (Eden) Hazard or Oscar, you are asking me about Hazard or Oscar in this moment. It's a consequence of the squad we have where, for these positions behind the striker, we have so many options. It's hard for me. I don't have a special pleasure leaving players out. I enjoy playing them and making them happy, but there's nothing I can do.</p>
</blockquote>
<p>- Jose Mourinho, November 2013</p>
<p>When it was announced that Mourinho would be taking over the helm from Rafa Benitez, few would have expected Azpilicueta to start more games at left back than Ashley Cole. Even fewer would have expected Mata to be resigned to just three starts in as many games. None would have expected Mikel Obi to have scored more goals than Mata at this time, but that is the current situation. Juan Mata, famously Chelsea's Player of the Year for the past two seasons, finds himself resigned to the bench, a place he's only well accustomed to when playing for Spain.</p>
<p>On the surface, it looks like Mourinho's decision is more personal than professional, but I see no yardstick for such a conclusion. Some conspiracy theorists even suggested that Mourinho was against Spanish players, but his apparent belief in Fernando Torres, coupled with his decision to pick Azpilicueta over Ashley Cole at left back, blatantly indicates otherwise. Mourinho has a knack for controversy and he's no stranger to high-profile radical decisions. If you take Mourinho's word for it and keep sentiments aside, you'd see he has a point. For a player that can easily walk into most teams in the world to be left in the shade, there has to be a rational reason.</p>
<p>From the moment news came that Chelsea was pursuing Willian, it was evident one of the midfield triumvirate was going to be left out. When a coach comes into a club in the off-season period, he spends time analysing his players to see those he can work with. During training sessions, he notes the abilities of each player and picks his team accordingly. Of course, over the course of the season, a coach's preferred starting lineup might change as players' attitudes in training change. Mourinho simply reiterated that his decision to fade out Mata was because he (Mata) does less defensive work than his midfield counterparts, however amazingly talented he is offensively.</p>
<p>Oscar, Hazard and Willian are in sizzling form. Mourinho's preferred formation is 4-2-3-1, meaning there's only room for three playmakers behind the striker. Oscar made 109 tackles in the 2012-2013 season, compared to 69 tackles from Mata and Hazard combined. Surely, Mourinho observed this defensive work! This season, he has a 61% successful tackle rate. He loves to shoot and is arguably Chelsea's best long-range shooter after Lampard - this season alone he has racked up 34 shots. He also had the added advantage of using the Confederations Cup to woo Mourinho to his side, so Mourinho had already seen his work rate in competitive action. Oscar could easily be Chelsea's next Frank Lampard as he has all the qualities to get there. He thrives when he plays the No. 10 position, just behind the striker - which, coincidentally, is Mata's preferred position too.</p>
<p>If Oscar had sent out a message that he was the man of the moment, Hazard has done well to show otherwise. The young Belgian possesses a quality that none of his other competitors have - quick feet and mesmerising dribbling (he has completed 65 dribbles, more than any other player in the league). This brings a different flavour to Chelsea's attack and somehow makes him untouchable. Not only is he one of only two players in the league to have surpassed 100 attacking contributions (assists, dribbles, crosses, shots on goal), he has the second-best shot accuracy of any player in the league with more than 25 shots, at 66.7%. His talents draw the opposition to him, thereby creating space for the rest of his teammates. With a sudden, explosive change of pace or a powerful curler, Hazard can change a game in seconds.</p>
<p>When a club with many midfield talents buys another midfielder for £30million, one would expect the new player to feature often. In all honesty, this was a no-brainer! Willian was expected to play often, otherwise he would not have been bought for such a fee. Willian is the most hardworking of the trio - he has played fewer games than Mata but has made more tackles - something that Mourinho craves. He is a player that can hold onto the ball and fits perfectly with Mourinho's aim of mixing rigidity with flair. If Mata (and maybe Schurrle) was to displace any one of the midfield trio, it would have been Willian. However, his superb work rate makes him the best fit for Mourinho's desire for 'hardworking' players. He, like Ramires, never tires of running and chasing down opponents when his team doesn't have the ball, reminiscent of Arjen Robben. If Mourinho was looking for a hard worker, he certainly got one!</p>
<p>Having said this, we can see why Mata is struggling to fight his way through. The trinity of Oscar, Hazard and Willian are performing, and it'd be unfair to blot any one of them from the starting lineup. It has to be said, though, that Mata is an extremely gifted player - blessed with a sublime first touch, the ability to play eye-popping passes, deadly at delivering set pieces, calm, and with a great personality. He is very good when his team is on the ball, but when the opponents are in possession, Mata isn't the one you can rely on to win the ball back. The game literally passes him by until a twist of fate brings the ball to his teammate. However, he makes a team tick offensively; no wonder Mourinho keeps reiterating his desire to keep him, insisting he has 'a big role' to play.</p>
<p><img src="http://ladmerc.files.wordpress.com/2014/01/image12.jpg?w=300" alt="The Deadly Trio: Oscar, Hazard and Willian are in scintillating form and it'd be unfair to fade anyone out for Mata" /></p>
<p>What then is the way forward for the Spaniard? His talent is one that's needed in very many (if not all) teams, Chelsea included. Mata has a wealth of clubs to pick from, should he finally decide his time at Stamford Bridge is over. Chelsea will be understandably unwilling to sell a player of such calibre to rival teams such as Arsenal and Manchester United. The first club has a rich array of world-class midfielders, so it is unlikely that they'd need him. The latter is well in need of Mata's services, and valued at £30 million, he would be a good buy. He can play behind the striker and sometimes operate from the flanks, giving room for the wingers to whip in the crosses, United style. Then again, Chelsea don't need to sell Mata to a rival club; they don't need to sell him at all. Other clubs like Atletico Madrid and PSG could see Mata ply his trade with them, but if reports from the press are anything to go by, it seems likely that Juan Mata will wait till the summer and force a move to Bayern Munich; after all, it isn't as if he is guaranteed a berth in the Spanish team come Brazil 2014.</p>
<p>He is undoubtedly the best player in Europe not getting what he deserves, but for reasons Mourinho considers right. With time, I hope his situation is resolved, because such talent should not be left to rot.</p>
<p>Thanks for reading. If you found this interesting, please share. Constructive criticism is welcome in the comments.</p>
]]></content:encoded></item></channel></rss>