A New Frontier in the Operating Room: Real-Time Histology for Robotic Surgery
- Steve Bell

Imagine this…
A surgeon sits at the next upgraded surgical robot (just sayin'). They are operating on a cancer patient, flicking between modes of advanced vision. With AI assistance, the robot has helped them identify the area of the tumour, and the surgeon and “assistive” robot zoom in on it. The surgeon starts the tumour resection.
Imagine the surgeon in the middle of the tumour resection, trying to decide: should I peel off another millimetre here? Should I stop? Ideally, you would sample the tissue and know whether all the tumour has been removed. Take too little and you risk recurrence; take too much and you can seriously compromise the patient's function. This is the fine balance precision surgery has to strike - and to strike it, you need histology to tell you where you are.
Traditional histology offers an answer, but only after tens of minutes (or more), and only if you can get fresh frozen sections. That delay means wasted anaesthesia time, uncertainty, possible under-resection (leaving tumour behind) or over-resection (damaging normal tissue). In robotics the arms are precise, but they aren't smart. They still wait on the surgeon's decision. (Today...)
But with real-time histology feedback, the robotic system becomes part of a closed loop: you interrogate, you see, you act - instantly - rather than pulling out, waiting, and coming back.
In short: real-time histology is the missing microscope inside the OR, there to guide the surgeon's or the robot's hand.
For many cancers, margins are everything. A positive margin means recurrence, more surgery, and worse outcomes. Yet we still rely on techniques born in the era before robotics: frozen sections, sending slides out, waiting for path. That friction is a drag on innovation. Eliminating it means better OR flow, more confidence, and ultimately, for the patient, greater safety.
A little background
For those not familiar with how histology works, here's some background:
The gold standard is still formalin fixation, paraffin embedding, sectioning, and staining - H&E (haematoxylin and eosin) plus immunostains. This gives exquisite detail, but it takes hours (or days) and is incompatible with intraoperative decision-making.
To get faster, we use frozen sections intraoperatively - cryostat cutting, a quick stain, pathologist review. But that still eats ~20–40 minutes, needs a dedicated pathology lab and staff, and may distort the tissue. (Not everyone has this available for all procedures.)
Some centres digitally scan whole slides and send them to remote pathologists (telepathology), but you still wait on cutting and staining.
The new wave is slide-free, optical, or chemical methods: you image or analyse the tissue (fresh or in vivo) and get a diagnostic read in seconds to minutes - no glass slides, no embedding, (and importantly) no wait.

These new methods can be grouped roughly by how they “see” tissue:
Optical imaging that gives cellular contrast (confocal, OCT, Raman, fluorescence)
Chemical fingerprinting (mass spectrometry, molecular probes)
Hybrid or augmented systems that combine modalities
And then there is in vivo imaging (you don’t even remove tissue) vs ex vivo imaging of specimens or surfaces (margins, biopsy cores) while in the OR.
State of the art
Optical and slide-free histology
Stimulated Raman Histology (SRH)
This is molecular vibrational imaging. You shine lasers, get contrast from chemical bonds, and render images that look like H&E. The big player is Invenio Imaging with their NIO platform. They already deploy in neurosurgery; their AI tools are growing. You can go from fresh tissue to a digital “histology slide” in under 3 minutes. No cutting, no staining - yet pathologic detail.
Because there's no slide, you preserve the specimen for downstream tests. That's a huge win, as many histology techniques are destructive to the specimen.

Confocal / fluorescence surface scanning
Companies such as SamanTree (Histolog) use fluorescence contrast plus confocal scanning to image the exposed surface of excised specimens. The idea: before the specimen leaves the OR, scan all surfaces and instantly flag suspicious spots. In breast surgery, studies show a reduction in re-operation rates. This is excellent when you don't need penetration depth but care about lateral margins.
Confocal microscopy itself is a way of imaging tissue that gives you crisp, high-resolution optical “slices” without actually cutting anything. Here's how it works in simple terms: a focused laser scans across the tissue, and only light from a very thin focal plane is detected - everything above and below that plane is filtered out. That means you can see real cellular detail at specific depths, layer by layer, almost like a virtual biopsy. In surgery, confocal endomicroscopy uses tiny probes to bring that same principle into the patient, so you can view living tissue architecture (cells, glands, fibres) in real time, right there.

Wide-field OCT + AI for margins
Optical coherence tomography (OCT) gives you micrometre-scale cross-sectional “optical slices.” Perimeter Medical is pushing this hard for breast lumpectomy margins. Their S-Series is in use now; their B-Series (with AI for cancer detection) is in pivotal trials in 2025. The advance here is coverage: you can scan a whole surface, get maps of likely tumour, and guide your resection right then. Not one field, many fields.
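To make the "coverage" idea concrete, here is a minimal sketch of how a wide-field surface scan might be tiled and scored into a suspicion map. This is my own illustration, not Perimeter's pipeline: the scan array, tile size, and the trivial intensity-based "classifier" are all placeholders standing in for a real trained model.

```python
# Toy sketch: turn a wide-field en-face scan into a per-region "suspicion map".
# Everything here is a placeholder; a real system would use a validated model.
import numpy as np

TILE = 64  # tile size in pixels; an arbitrary choice for the demo


def score_tile(tile: np.ndarray) -> float:
    """Stand-in for a trained margin classifier.

    Mean backscatter intensity is used as a toy proxy for suspicion;
    a real system would score each region with a model trained on labelled data.
    """
    return float(tile.mean())


def suspicion_map(scan: np.ndarray, threshold: float = 0.6) -> np.ndarray:
    """Tile the surface scan, score each tile, and flag the suspicious ones."""
    rows, cols = scan.shape[0] // TILE, scan.shape[1] // TILE
    flags = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            tile = scan[r * TILE:(r + 1) * TILE, c * TILE:(c + 1) * TILE]
            flags[r, c] = score_tile(tile) > threshold
    return flags


if __name__ == "__main__":
    fake_scan = np.random.rand(512, 512)  # placeholder for an en-face image
    flags = suspicion_map(fake_scan)
    print(f"{flags.sum()} of {flags.size} regions flagged for a closer look")
```

The output is exactly the kind of "map of likely tumour" a surgeon (or a robot) could act on region by region, rather than a single field of view.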

MUSE / UV surface excitation
Microscopy with ultraviolet surface excitation (MUSE) is an elegant slide-free method. You excite the tissue surface with UV light, get contrast that mimics H&E, and read directly. It doesn't damage deeper tissue and gives rapid results. This is more at the translational/early-commercial stage, but promising. Some groups also offer “optical sectioning” variants (InstaPath type) that give you shallow-depth images, almost as if you had cut a thin slice.

Lechpammer, M.; Todd, A.; Tang, V.; Morningstar, T.; Borowsky, A.; Shahlaie, K.; Kintner, J.A.; McPherson, J.D.; Bishop, J.W.; Fereidouni, F.; et al. Neuropathological Applications of Microscopy with Ultraviolet Surface Excitation (MUSE): A Concordance Study of Human Primary and Metastatic Brain Tumors. Brain Sci. 2024, 14, 108. https://doi.org/10.3390/brainsci14010108
In vivo (or in situ) microscopy
Probe confocal / laser endomicroscopy
ZEISS's CONVIVO is a surgical confocal probe used in neurosurgery. You bring the probe to the brain, see cellular patterns live, and make judgments about tumour infiltration. This is already FDA cleared. In GI, lung, head and neck, and elsewhere, systems like Cellvizio (Mauna Kea) let you see the mucosal or submucosal layer in vivo with microscale resolution. You inject fluorescent dye, bring the probe in, and interpret what you see. Tiny field of view, but immediate.


As a note: these tools won't replace the resection itself; they guide you where to cut, or let you check margins on the go.
Chemical / spectral fingerprinting
REIMS / iKnife (electrosurgical mass spec)
The “iKnife” concept: hook your electrosurgical device to a mass spectrometer. When you cut, the heated aerosol is sucked into the mass spec, analysed in seconds, and the system classifies the tissue type (tumour vs. normal). It's very sexy. There are strong academic data across cancer types, but bringing it into OR workflow (mass spec footprint, calibration, robustness) is nontrivial. And as we see robots getting “smoke extraction” systems, I do start to wonder how small we can get the detectors. Hmmm, maybe some wild fantasy - but…
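To give a feel for what "analysed in seconds" means on the software side, here's a toy sketch of the classification step: binned spectra in, tumour-vs-normal label out. The data are synthetic and the plain logistic regression is nothing like the validated models used in the published REIMS work - it's just to show the shape of the problem.

```python
# Toy sketch of a spectral classification step: synthetic "spectra" are binned
# intensities; real systems train on large curated libraries of ex vivo samples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_BINS = 200  # number of m/z bins after preprocessing; arbitrary for the demo

# Synthetic training data: pretend tumour spectra have a shifted lipid profile.
normal = rng.normal(loc=0.0, scale=1.0, size=(100, N_BINS))
tumour = rng.normal(loc=0.3, scale=1.0, size=(100, N_BINS))
X = np.vstack([normal, tumour])
y = np.array([0] * 100 + [1] * 100)  # 0 = normal, 1 = tumour

clf = LogisticRegression(max_iter=1000).fit(X, y)

# "Intraoperative" call: one new spectrum arrives from the smoke plume,
# and we return a probability the surgeon (or robot) can weigh.
new_spectrum = rng.normal(loc=0.3, scale=1.0, size=(1, N_BINS))
prob_tumour = clf.predict_proba(new_spectrum)[0, 1]
print(f"tumour probability: {prob_tumour:.2f}")
```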
MasSpec Pen / molecular touch
This is a handheld pen: touch the tissue surface for a few seconds, the device sucks up molecules, runs mass spec, and returns a classification (tumour vs. healthy). Early results in pancreatic, ovarian, and other cancers are favourable. The challenge is fitting it into real-world sterile workflows, speed, and integration with imaging or robot arms. But imagine if a robotic end effector gets one of these pens and we can make it work. It will start to give even more info to the brains of the robot… and, well, we all see where this is going, right?

What all this could mean for robotic surgery
My wow moment is this: all these modalities become yet more sensors in a robotic system. You don't just see anatomy (via camera, fluorescence, depth) - the surgeon (and the robot AI) see actual pathology. The robot (or the surgeon through the robot) can then adapt mid-course. Initially I imagine you will get visual guides of “cut to this line” - but then, of course, with “auto resect” the robot could do a better job of following the tumour margin boundary as set by the pathology sensor, in combination with the millions of cases of outcome data from the robotic hive that has trained it. You get where this could go? The ultimate robotic precision resection, with real-time margins.

Picture this workflow:
The robotic arm holds a tumour; you ask (yourself or maybe the robot), “is that margin clean over there?”
You deploy a confocal probe, OCT beam, or mass-spec pen to that area; the robot brings it in and the sensors kick in.
You immediately get a read out (normal vs tumour).
If it's tumour, the robot advances by 1 mm and cuts.
If normal, it stops.
Rinse and repeat.
You’ve created a closed-loop intraoperative histology + actuation system.
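For the software-minded, here is what that loop might look like stripped to its bones. It's a deliberately simplified sketch: the probe, robot, and tissue model are mock objects I invented for illustration, and nothing here resembles any real surgical robot's API. The surgeon stays in the loop, confirming every proposed cut.

```python
# Sketch of the "interrogate, see, act" loop. All interfaces are hypothetical.
import random
from dataclasses import dataclass


@dataclass
class Reading:
    label: str         # "tumour" or "normal"
    confidence: float  # 0..1, from whatever histology sensor is deployed


class MockProbe:
    """Stand-in for a confocal probe, OCT beam, or mass-spec pen."""

    def __init__(self, tumour_depth_mm: float):
        self.tumour_depth_mm = tumour_depth_mm

    def measure(self, depth_mm: float) -> Reading:
        label = "tumour" if depth_mm < self.tumour_depth_mm else "normal"
        return Reading(label, confidence=random.uniform(0.90, 0.99))


class MockRobot:
    """Stand-in for the arm; the surgeon confirms every proposed cut."""

    def confirm_with_surgeon(self, depth_mm: float, reading: Reading) -> bool:
        print(f"at {depth_mm:.1f} mm: {reading.label} "
              f"({reading.confidence:.2f}) -> surgeon approves cut")
        return True

    def advance_and_resect(self, step_mm: float) -> None:
        pass  # real actuation would happen here

    def stop(self) -> None:
        print("margin reads normal - stopping")


def margin_loop(robot, probe, step_mm=1.0, min_confidence=0.9, max_depth_mm=20.0):
    """Advance in small steps while the sensor still reads tumour."""
    depth = 0.0
    while depth < max_depth_mm:
        reading = probe.measure(depth)
        if reading.label == "normal" and reading.confidence >= min_confidence:
            robot.stop()
            return
        if not robot.confirm_with_surgeon(depth, reading):
            robot.stop()
            return
        robot.advance_and_resect(step_mm)
        depth += step_mm


if __name__ == "__main__":
    margin_loop(MockRobot(), MockProbe(tumour_depth_mm=3.0))
```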
Combine that with advanced computer vision, tissue segmentation, augmented reality, and force feedback, and you get robots that don’t just cut on preoperative plans but adjust on the fly to tissue microstructure.
Also, many of these modalities integrate naturally into robotic instruments. The same endoscope, the same boom, can carry OCT or confocal modules. This reduces instrument swaps. Robotics companies are already shipping multimodal vision systems (white light, NBI, fluorescence, 3D stereo). In my mind, the next generation is optical histology paired with those modes.
Because robotics is inherently modular and sensor-rich, I think it’s the perfect host for real-time histology. The day you can “see cancer at the cellular level through the robot’s optics” you shift from gross resection to precision oncology.
These are just speculations and musings by the author for education purposes only.