Google Beam: A first encounter with Google’s revolutionary telepresence tech

I’ve seen a lot of collaborative tech over the years. Enough to know the pattern: the anticipation, the demo, the “if only this were 10 years from now, when bandwidth is better” conversation. We’ve all been there, watching potential trapped in beta, waiting for Moore’s Law to catch up, only to see the original concept superseded before it ever reaches maturity.

So when I finally stepped into a closed-door booth at ISE 2026 for my one-on-one showcase of Google Beam, I was fully prepared for another tech demo. What I wasn’t prepared for was the moment my brain would need to recalibrate what tele-“presence” actually means.

I first came across Google Beam through an article just a few months back. Formerly dubbed “Project Starline”, Beam promised immersion and presence in one-to-one communications through a batch of new technology, backed by HP and Google. I made a mental note to see it at the next chance. Then came ISE.

For anyone who hasn’t visited, Integrated Systems Europe is a huge gathering of AV innovation, from digital whiteboards to AI-driven stage production and all kinds of collaboration platforms. Each year, the industry seems determined to outdo itself. ISE 2026 was one of those years.

The Beam demo required advance booking, which was unusual for an on-booth demo. This was to ensure the experience was approached respectfully and thoughtfully, a fact I only appreciated afterwards.

The first encounter: Santa’s Grotto

The Google and HP teams at the booth were relaxed, professional, and jovial as I waited in a queue of very excited tech-type people, all full of anticipation. I watched as each entered one at a time and emerged a minute or two later, dazed and smiling, as if Santa had just promised them a new bike.

When my turn came, I opened the door to an empty booth. I was alone, apart from a large screen – what looked like an average-sized TV monitor – and a wooden laminate table and office chair. Nothing special. I sat down.

Then something truly amazing happened – a visual event that made my brain buzz, like a hangar door had just opened in my mind.

The screen warmed up (or did my mind just want it to, like in those 1930s Buster Crabbe Flash Gordon serials?). Years of experience with VR and 3D technology kicked in as I assessed the image. I sensed very fine pixelation, but only because I was leaning forward, on tech alert.

There in front of me was “Brett” (Brett Highley of Google’s business development team). Bright. In 3D. Smiling.

Initially, it felt like a clever illusion, impressive 3D technology, lifelike and true to size. But everything shifted the instant Brett began to speak. I snapped from observer to participant. The image before me was clearly digital, yet inexplicably concrete.

This was more than a big-screen Zoom experience. Something else was going on, and it was subtly profound. We were conversing.

I watched him hold up an apple. I don’t remember what I said. I was watching, studying, looking at the tech, but feeling a connection happening. Why? Because it was so revealing. His body language. It felt personal, intimate, human.

Then it was over. Two minutes. The door opened. I was whisked out, still wanting to peek behind the curtain, to see the edges of the magic. Would I be able to peer down and see his torso cut off at the bottom of the monitor, killing the illusion? I never got the chance.

I was intrigued. I hadn’t had enough time to process what I’d experienced. My mind evaluates new experiences on many levels, and I’d only had time to get past the pixels. I automatically look for flaws when seeing new tech, but with the first Beam demo there simply wasn’t time. I was still processing as I left.

I needed more.

The second demo: When it clicked

The next day was the last day of the ISE show. It was quieter, with much yawning among weary personnel. I made it to the Google stand for my 11:30 appointment, and Mike Karr, a Global Solutions Architect at Google and a longtime architect on the Starline project, introduced himself. I came prepared with tech questions, but a new twist changed things: I would be talking to him through Beam.

This changed everything. This guy, physically standing beside me, was going to conduct the interview from an identical Beam setup in the adjacent room, although it could have been located anywhere else at the show, in the city, on the planet. Maybe even in space. But for the demo, he was on the other side of the wall. He would be seeing me on his screen in 3D, as I would be seeing him on mine. I knew I would never see my own projection on his Beam, but the thought lingered, and I wondered what my ‘digital presence’ would have looked like: a perspiring, convention-weary version of me, or maybe one with some added perkiness from light digital enhancement, cleverly disassembled and rebuilt in the Google Cloud… I digress.

As we waited to go in, Mike said something that perfectly captured the arc of experiencing Beam: “My personal feeling is that you need three demos to really feel the heart of what it is. The first time, you can’t believe what you just saw. The second time, you believe it. And the third time? You’re in.”

I stepped back into the booth for round two and took a seat. Mike appeared in front of me on the screen. Now I had a reference – I’d just been talking to him outside, in person, seconds ago. On the screen, he was brighter, more defined, and the detail was intense, as if someone had carefully lit him to perfection. Skin blemishes, pores, smile lines. A perfect reproduction.

I asked tech questions, and he showed me graphical representations of the six cameras housed in the screen bezel. I was listening, but mostly I was watching, taking in this incredible vision in front of me. I wheeled the chair back a little to clear any pixelation and see Mike in full. He was there, as if on the other side of a window, maybe two feet away.

We were talking. I was getting answers. He was relaxed, no rush. I felt at ease.

Then I realised: we were having a chat. A human chat. Like in the pub (although somewhat clinical in this case). We were two people simply having a conversation in the same space.

He moved around in the chair while talking, showing how the tech could track him and how the audio moved so his voice was directed at me depending on where he was projecting. It was seamless, invisible.

Only afterwards did I think about how advanced the tech must be, how many super-intelligent people it took to get that sound source to realistically track, and how many hours, days, and years it took to achieve that demo. But in the moment, I lost track of time, forgetting that others were waiting for their turn. I had been so convinced and completely engrossed I forgot I wasn’t just talking to someone over a table. Mike suggested we continue our discussion outside, so he got up (again, I missed the chance to check for the floating torso).

I realized this was more than just a tech demo. This had passed a point in digital experience. Something significant had happened in that conversation with Mike.

Behind the illusion

I’ll try to explain the technology in layman’s terms, as it’s easy to fall into a nerdy wormhole with Beam. Google supplies the AI processing, while HP supplies the hardware, called Dimension. The system uses six depth-sensing cameras that map you in 3D, constantly scanning your shape, your movements, every detail. That information gets sent to Google’s data centre, where AI rebuilds you in real time before sending the result back to the other person’s Beam.

The display creates what’s called a “light field”, projecting many slightly different views of the person in front of it. These views appear closer or farther away, building a moving, ever-changing 3D image that convinces your eyes and brain that there’s a living, breathing person sitting there.
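As a rough mental model of that idea — my own toy sketch, not Google’s implementation, and every name and number here is an assumption — a light-field renderer can be pictured as blending the captured camera views nearest to wherever each of the viewer’s eyes happens to be:

```python
import numpy as np

def blend_views(views, cam_angles, eye_angle):
    """Linearly blend the two camera views nearest to the eye's angle.

    views: list of HxWx3 image arrays, one per camera.
    cam_angles: sorted horizontal angles (degrees) the cameras sit at.
    eye_angle: the tracked horizontal angle of one of the viewer's eyes.
    """
    angles = np.asarray(cam_angles, dtype=float)
    # Clamp to the captured range so we never extrapolate beyond a camera.
    eye_angle = float(np.clip(eye_angle, angles[0], angles[-1]))
    hi = int(np.clip(np.searchsorted(angles, eye_angle), 1, len(angles) - 1))
    lo = hi - 1
    span = angles[hi] - angles[lo]
    w = 0.0 if span == 0 else (eye_angle - angles[lo]) / span
    return (1.0 - w) * views[lo] + w * views[hi]

# Two toy "cameras" at -30 and +30 degrees; their images are flat grey
# levels so the blend is easy to inspect.
views = [np.zeros((2, 2, 3)), np.full((2, 2, 3), 10.0)]
cam_angles = [-30.0, 30.0]

# Each eye sits at a slightly different angle, so each receives its own
# blended view; that per-eye disparity is what produces the 3D effect.
left_eye = blend_views(views, cam_angles, eye_angle=-3.0)
right_eye = blend_views(views, cam_angles, eye_angle=+3.0)
```

The point of the sketch is only the principle: many real views, one synthesised view per eye, continuously updated as you move.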

Combined with Google’s cloud-based processing, the result is an example of humankind’s tenacious capacity to create things that, only a few years ago, would have seemed like science fiction. How many times did I hear the word ‘Holodeck’ mentioned? I’m sure Gene Roddenberry would have loved Beam.

The cameras also track where you’re looking, predicting where you’ll move next to eliminate lag. This matters, because any glitch in the image would shatter the illusion. As Mike demonstrated, an array of microphones follows your position in the space, with four speakers working together to ensure the voice precisely matches the position of your mouth. Even if the speaker turns their head, the voice is recreated to match, no matter where they are in the frame.
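The voice-follows-position effect can be illustrated with a textbook constant-power pan — a standard audio technique, not Beam’s actual four-speaker algorithm, and everything in this sketch is my own simplification: as the tracked mouth position moves across the frame, gain shifts smoothly between speakers while perceived loudness stays constant.

```python
import math

def pan_gains(x):
    """Constant-power stereo pan for a mouth position x in [-1, +1].

    x = -1 is fully left, +1 fully right. The cos/sin pair keeps
    gL**2 + gR**2 == 1, so perceived loudness stays constant while
    the source moves across the frame.
    """
    theta = (max(-1.0, min(1.0, x)) + 1.0) * math.pi / 4.0
    return math.cos(theta), math.sin(theta)

# As the tracked mouth drifts from the centre toward the right of the
# frame, the right speaker's gain rises and the left's falls.
centre = pan_gains(0.0)   # equal gains on both speakers
right = pan_gains(0.8)    # biased toward the right speaker
```

A real system would extend this across four speakers and add depth cues, but the idea is the same: the tracked position drives the gains.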

The technical achievement is staggering. But here’s what surprised me most: I wasn’t focused on the technology. Usually, that’s what drives my interest – seeing what’s next and understanding how it works. But this time, my surprise wasn’t about the tech, though I’m still intrigued by the magic. For once, the product spoke for itself. It was like the technology didn’t matter. Maybe it’s some kind of marker. Are we at this stage now in 2026, where innovation becomes secondary?

As Greg Baribault from HP explained in a seminar I had attended earlier in the day, “Part of what we define as presence, in the sense of virtual presence, comes from technology completely disappearing.” That is exactly what happened.

What needs work

The sound was really the only element that fell short of my expectations. Mike’s voice sounded a little muted, definitely digital, and missing some very important human nuances. It was good but limited, and I’m not sure whether that was because the demo space wasn’t quite purpose-built.

What it lacked was the texture of silence. For the true intimacy of real human contact, you need to hear the silence: a sigh, a breath, a light finger tap, the sound of swallowing after words, even a blink. Without this, that essential human quality can’t fully come through. With visual technology this impressive, I would expect the sound to match. It needs to be pin-drop clear. I’m sure this is a simple update.

You could also say the resolution wasn’t quite there at very close quarters. You still need to be some distance from the screen for the pixelation not to be a small distraction; the sweet spot seemed to be just over two feet.

That has always been the stumbling block with virtual reality tech – seeing pixels kills the illusion. Like all TVs, there’s a safe distance at which the human eye cannot discern the pixels, and the display becomes a full canvas of continuous, unbroken colour and detail.

But the feeling of presence was real this time. I didn’t have to suspend reality mentally to complete the illusion, as with previous 3D experiences. This time, it had crossed into the convincing.

Why this isn’t VR

This wasn’t the isolated personal environment of typical headset VR, closed off from reality, with no physical connection to the real world except through a handset. Even with the latest AR tools, the headsets are very present and cumbersome, creating worlds of avatars in clunky, completely artificial environments. My appreciation for VR has been severely bruised!

Beam required no headset. The 1-to-1 experience is two humans face-to-face. It’s still a digital experience, but not in a closed headset environment. I’ve been told that people with visual impairments who have found it difficult to experience VR have had a more successful experience with Google Beam. Also, the problem of motion sickness in VR due to body gyros competing with the VR image is eliminated.

As Vasu Subrahmanian, Head of Sales for Google Beam, explained in the seminar at the ISE show: “The majority of communication is actually non-verbal. It’s less about the words that I’m saying, and it’s more about my body language, my presence, my ability to maintain eye contact when I’m talking.” She continued, “You could try to remember an in-person meeting that you had, maybe even here at ISE. You’d be able to remember it. You’d remember the room you’re in, you’d remember the person, you’d remember some details. Now, if I asked you to try to remember a recent video call that you’ve had, I personally cannot remember many of my video calls with the same amount of clarity.”

She was absolutely right. The first demo stuck with me so vividly that I made another appointment for the next day, the last day of the show. I needed to experience it again, to understand it better. How many video calls have I forgotten the moment they ended? Countless. But that first two-minute encounter with Brett? I remembered every detail. The apple he held up. The feeling in my chest when my brain had to recalibrate. The frustration of being ushered out before I could see the edges of the illusion.

That’s the proof right there. Beam creates memories the way in-person meetings do. It’s not just another video call you’ll forget by tomorrow.

Beam is not VR. It’s something else entirely.

The human connection

Simple, intuitive interaction. There was nothing to prepare, no headset, no handset. Just walk in and interact.

Right now, simply sitting and talking with someone in Beam doesn’t require physical contact, but I can see where this could go, the additional opportunity to touch in harmony with Beam. Would a handshake be possible, a tender hand squeeze over continents in synchronisation? These are the simple gestures of human encounters we have enjoyed for millennia. The technology sparks these thoughts. Always progress.

I noticed the body language. I could feel the person’s personality. How? Visual cues in the face, of course: eyes, mouth, complexion. But something more than facial cues: fingers, hand movements, and small shifts in posture on the seat. Maybe a tapping foot. These are all visual cues and body language that Beam can communicate.

Even a pause in conversation is not treated as dead air, the way ordinary video compression lets digital artefacts creep in to save bandwidth. Silence doesn’t mean the end of a conversation at all. How much is communicated between two people during silence is something you could almost measure with Beam: visual cues, maybe a narrowing of the eyes, a drop of the head, a knowing gaze, a tear. In three dimensions, there is a huge amplification beyond the two-dimensional video chats we’ve got so used to.

I took the bus home from the event, and I was inspired. I found myself people-watching and noticing the visual cues—the real-world cues that are translated in Beam. Someone looking out the window, nervous twisting of hair, maybe the tapping of a finger keeping time, the soft closing of eyes, a smile or a nod as someone passes, a loose thread hanging from a cuff. Small details but huge cues. Silence can still tell a story.

What we’ve normalized (and shouldn’t have)

I think we have normalised partial communication too much. It’s convenient to settle for simple images and sounds. Higher bandwidth and processing have made conversations seamless and painless, and there are great tools to help us integrate this with a modern hybrid life, but the human-level experience hasn’t fundamentally changed.

Efficiency has become the focal point, AI agents watching our every move to ensure our meetings are seamless and perfectly curated, creating the perfect meeting summary document. But this does nothing for true interaction and presence.

The research presented at the seminar showed that employee engagement is at possibly the lowest level since the pandemic, with “450 billion U.S. dollars a year in lost productivity from a disengaged workforce.” As Vasu noted, “When you look at just something like employee satisfaction and engagement, as a metric on its own, it feels a little soft. It feels a little squishy, but then when you connect that to how people are actually collaborating, how productive teams are, you see there’s an actual dollar impact.”

The cultural opportunity

The seminar discussed Beam as the “hero” experience in a corporate meeting, as it can blend with other video collaboration platforms by displaying images in 3D space. I haven’t seen this, but I find the corporate mix unsettling, like a compromise that leans on legacy tools. I feel there is an opportunity to take this further; we shouldn’t settle for only a blend. This technology shows we need to reinvent, not reissue.

Its corporate application still has some way to go. Right now, it’s 1-to-1, which does not solve the boardroom meeting or even a group of people (yet). It could still host the “hero” communication between two CEOs, politicians, or the next big interview, offering unparalleled convenience and the huge sustainability benefit of an intimate meeting without any mileage or fuel requirements.

But I believe its biggest opportunity is in the public domain. Its cultural significance is huge.

My vision is to see Beam in every town and city, a destination for families, friends, colleagues, and businesspeople to converse over coffee or lunch. A business pitch, a mentor, a therapist, a rockstar experience, a doctor, teachers, siblings, parents, friends, to communicate naturally, professionally or socially. I could see a Beam booth in every coffee shop, every library, every McDonald’s. This tool has the potential to unite humanity across the globe, as the internet did.

I was thinking about my own family and how many hours and miles separate the UK and the US, making a real physical connection costly and time-consuming. With Beam, I could almost share an experience with my mum, dad, or sister for a few moments, as if they were a foot away. Just for a moment. How would it be for millions of families separated by distance? Yes, I get the commercial opportunities, but I’m talking about something more fundamental: a human connection in the public domain. Not just corporate meetings, but the everyday social fabric of families, friends, and communities connecting across distance. This started my mind accelerating with the possibilities. The bandwidth and the cost are real, but this is now a reality, and the cost feels insignificant next to what it delivers.

It’s here now

I was shown a picture at ISE of a Beam shipping box. All this tech in a flat brown box, looking like someone had just wheeled it out of Best Buy. This product is in production. It’s here.

My interaction with the HP/Google team was humbling. No pizzazz, no showmanship. They’ve clearly been working and living with Beam for many years. It simply speaks for itself. I watched as people emerged from the booth after their short two-minute magic experience, in awe. It’s quite a thing to see.

For tech junkies like me, and I’m sure for most of the people at the show who are there to see “tech”, the interest is in the “how.” But one thing I’m sure happened to many who left the booth was a collective wow, not about the hardware per se, but about the possibilities of where this leads.

What is the next iteration of Beam? I feel that the deep human interaction is the pinnacle to be achieved, and this is finally with us. Whatever comes next will be a complement to this achievement.

I hope they don’t mess it up.

This piece was first posted on LinkedIn on Feb. 12. Connect with Matt Pluck on LinkedIn.