The bridge between manual laparoscopy and robotic assisted surgery: Advanced Assisted Laparoscopy (AAL)
- Steve Bell
- Mar 26, 2024
- 18 min read
Updated: Jul 30
It is clear that for many sites of care, many countries, many health systems, many procedures and many surgeons - the answer is not always Intuitive-style Robotic Assisted Surgery (RAS).
There are numerous reasons why this is not always the answer such as:
Short procedure lengths don’t merit long robot set up times.
Budget - some procedures just don’t have a level of reimbursement that can support a full RAS case.
Some surgeons just don’t want to give up manual laparoscopy and learn RAS.
Some operating rooms just can’t fit a full mainframe robot in them.
Workflow can dictate that you don’t need RAS for the full procedure - and the robot needs to be “moved elsewhere” during the case.
Or there is simply no robot available - ever, or on that day.
The list goes on and on. It is not so clear cut, not black or white, whether something should be done with RAS or should still be done as a manual lap case.
But if there is one thing that da Vinci has taught us over the last two decades, it is that there are certain attributes of RAS that, if applied to manual laparoscopy, would help it a lot.
And that should convert into better outcomes for patients - and ultimately reduced healthcare costs and greater efficiency in health systems.
What’s wrong with manual laparoscopy?
In short the answer is often “Nothing.”
For many users - young, early in their career, with no posture issues, able to perform complex straight-stick manoeuvres, work with reversed movements, rely on capable assistants and cope with 2D cameras (manually held, and therefore moving) - the full spectrum of manual laparoscopic procedures has been achievable. From the simplest diagnostics to the most complex multi-quadrant six-hour procedures.
However, for the majority of surgeons (whether by choice, physical status or operating ability), some aspects of manual laparoscopy make it too demanding, sometimes too hard, and often (for them) suboptimal in terms of clinical outcomes. They are better off doing the case open.
So here are some of the issues that are often cited when people talk about how manual laparoscopy can be improved.
I’m going to describe the issues - and then delve into the technology that is either available or coming to improve on manual laparoscopy. Much of this is about taking lessons learned from the world of RAS and bringing some of those capabilities to surgeons - but without a fully powered mainframe surgical robot in the middle of it.
Imaging in assisted laparoscopy
One of the key learnings from surgeons doing RAS is that they love the image quality - and above all the 3D image of the robot.
“But” I hear you cry, “We have 3D imaging in manual laparoscopy today.”
That is correct - but the 3D imaging systems of today have one major component that makes them a bit of a problem: the human holding the laparoscope. Let me come back to that in a minute.
2D images mean that the surgeon loses their binocular vision - and they have to learn to interpret the flat image for depth. Now for many surgeons - they soon learn that ability - and they often build on the 3D maps of their open surgery knowledge to reconstruct the 2D image mentally into a 3D map of the abdomen. Some people when looking at a 2D screen may as well be looking at a 3D screen - as they are so good at that learned depth perception. These are the best of the best.
However - not everyone has that innate ability to do this. And at the other end of the spectrum there are surgeons that just cannot reconstruct that 2D image into a 3D image in their minds. They just struggle with depth perception. And then there is a whole spectrum between those two extremes.
The problem with losing that 3D perception comes when working with deep structures - say, heading down to the floor of the pelvis. Being able to dissect to the limits of the pelvic floor can mean the difference between a good and a bad cancer operation.
I’ve been involved in laparoscopy for 30 years - and I have (what I think) are pretty top level 2D operating skills. For me, depth perception is pretty natural when doing laparoscopy.
However - even I will say - that the second a good 3D scope goes in - even for me - I can see those few millimetres beyond where I would have stopped with the 2D vision.
It makes suturing and knot tying that little bit easier - I can always - and I mean always - grab the needle with a 3D scope.
Even for the best of the best - good 3D imaging improves the skills of the operator.
So why has it been a success on the robot - and less so in manual laparoscopy?
Part of the problem has been the way a 3D image is rendered onto a flat screen (unlike the Intuitive console’s periscope-style viewer, which uses genuine binocular vision).
Instead, 3D “open screen” systems - or at least most of them today - use a binocular scope and two cameras in the camera head (versus one optical channel in a 2D scope and one camera in the camera head).
The 2D image - just like your standard video camera - pushes the signal to a regular TV screen.


The 3D system takes the two images (left eye and right eye) and pushes them to a special 3D monitor. Most modern systems use a method of polarising the images. The two images are put on the screen at the same time (right eye and left eye). If you look at a 3D screen without glasses you can see that blurred mess of two overlapping images.
The clever thing about these screens is that the two images are polarised at 90° to each other. So the left eye image is, let’s say, “horizontally polarised” and the right eye image is “vertically polarised.”
Without glasses you see the blurry mess because both eyes can see both polarised images at the same time.
The magic is then to put on polarised PASSIVE glasses - with one eye having a horizontally polarised lens and the other a vertically polarised lens. That means, once you slip on the glasses, the left eye can only see the left image and the right eye can only see the right image. And as the two images are offset from each other (because of the two optics in the scope), your brain interprets this as a nice 3D image.
Let me talk quickly about passive vs active glasses. Passive glasses are just like sunglasses with fixed polarised lenses.
The other type are active glasses - where, instead of being polarised, one lens goes dark for a few milliseconds while the other goes clear. (So you could shut off the right eye and let the left eye see the screen.) At that exact moment you flash up the left scope image on the screen. Then, in sync, you shut off the left eye (that lens goes dark and the right eye lens goes clear) just as you put up the right eye image on the screen.
These active systems sync left and right TV images with left and right shuttering of the active glasses.
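To make that left/right shuttering concrete, here is a minimal conceptual sketch of the sync loop. It is purely illustrative: the functions are made-up placeholders and the 120 Hz refresh rate (60 frames per eye) is an assumption, not the behaviour of any real display or glasses.

```python
import time

REFRESH_HZ = 120            # assumed display refresh rate: 60 frames per eye
FRAME_TIME = 1.0 / REFRESH_HZ

def show_frame(image):
    """Placeholder: push one eye's image to the monitor."""
    ...

def set_shutters(left_open: bool, right_open: bool):
    """Placeholder: drive the glasses' shutters (in reality via an IR/RF sync emitter)."""
    ...

def frame_sequential_loop(left_frames, right_frames):
    # Alternate eyes: each frame uses the monitor's full resolution,
    # while the opposite eye's shutter goes dark in sync with it.
    for left_img, right_img in zip(left_frames, right_frames):
        set_shutters(left_open=True, right_open=False)
        show_frame(left_img)           # left eye sees the full-resolution left image
        time.sleep(FRAME_TIME)
        set_shutters(left_open=False, right_open=True)
        show_frame(right_img)          # right eye sees the full-resolution right image
        time.sleep(FRAME_TIME)
```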
The benefit of active glasses is that when a lens is not dark it is almost totally transparent - so it is not like watching through sunglasses. The image you see on the screen is very bright.
The next benefit (important some years ago) was that when the left image goes on the screen, it uses the entire resolution of the screen. When it flicks to the right image, the right image takes up the full screen. You’re not sharing the screen between two images at once, so resolution stays high.
So ACTIVE glasses tend to give a brighter image (and, historically, a higher resolution image) because the whole screen resolution is used each time an image is splashed up.
The downside was that you have these flickering glasses on your face - and for many people they would induce real nausea and eye strain. The lenses in your eyes are constantly refocusing - on the distant screen for half the time, and on the near, shuttered lens the other half - at a very high frequency. Hence active glasses have all but died away.
Instead, most robotic systems - and most handheld laparoscopic systems that use an open screen - use passive, polarised glasses.
The upside is less (I say less) nausea and fewer headaches. It’s the same impact as just wearing sunglasses.
But the down side is that you are basically wearing sunglasses to watch a screen. So the image is about half as bright as would be seen with the naked eye.
The second issue is that passive screens (especially most older systems) lose half the resolution. So with an HD (high definition) image on an HD screen, you are losing half the image and getting about half HD. What you see is actually closer to SD (standard definition). (Not quite, but it’s close enough for the example.)
Instead, with a da Vinci you look into a periscope and get a bright, full HD image in each eye from two separate screens - so it feels like double the brightness and great clarity. That’s why, when you see single-channel images broadcast at conferences from a da Vinci side by side with another robot - just 2D from a single feed, no sunglasses filter, both at about the same HD resolution - or when watching a comparison on YouTube, you could say they both seem as bright and clear as each other.
But that is misleading. Once you view them through their 3D systems, an Intuitive 3D HD image will always feel brighter and higher resolution than a 3D HD passive open screen viewed through sunglasses.
Boom - so that is why, even if you have a 3D HD camera, switching from an HD 3D screen to a 4K 3D screen is well worth it. Why? Because it will be brighter (modern OLED screens are simply brighter and can be turned up higher), so it will overcome the sunglasses effect better. But more importantly, you get more of your HD image up on screen for each eye - because the 4K screen has far higher resolution than the HD screen, you throw far less of your image away.
To that end, a 3D HD camera feed on a modern, bigger 4K OLED screen will give an image far closer to the 3D HD Intuitive image.
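As a rough back-of-the-envelope check on that resolution argument, here is a tiny sketch. It assumes a simple line-interleaved passive screen, which is a simplification of how real 3D monitors actually process and scale the signal.

```python
# Rough per-eye resolution arithmetic for line-interleaved passive 3D.
# Illustrative numbers only; real systems vary in how they interleave and scale.

def per_eye_lines(screen_lines: int) -> int:
    # A line-interleaved passive screen alternates rows between the eyes,
    # so each eye gets roughly half the screen's vertical lines.
    return screen_lines // 2

hd_screen = 1080     # 1920 x 1080 panel
uhd_screen = 2160    # 3840 x 2160 ("4K") panel
camera_feed = 1080   # 3D HD camera: 1080 lines per eye

print(per_eye_lines(hd_screen))    # 540 lines per eye -> well below the 1080-line feed
print(per_eye_lines(uhd_screen))   # 1080 lines per eye -> the full HD feed survives
```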
So advancement number one in laparoscopy: new, bigger (55-inch) 4K OLED 3D screens - with higher luminosity, better black point and better colour reproduction - will assist manual laparoscopy far more in terms of 3D vision.
But this is not the only issue!!
I said that passive glasses get rid of a lot of the nausea - but not all of it - because of a problem that is present with 2D cameras but amplified massively with 3D handheld cameras.
I’m talking about motion sickness.
In manual laparoscopy an assistant is often holding the camera in one hand and a retracting instrument in the other. Humans holding things in awkward positions get tired, and have natural muscle movements and natural tremor.
Go and watch a YouTube video of a 2D manual lap case, and you will notice just how much the camera image moves around - it’s nauseating. And for some sensitive people it is really bad.
But it is not just up-and-down or side-to-side movements - the scope gets rolled and the axis goes off the horizon. Especially as the assistant sweeps left and right: the wrist naturally swings and the camera rolls.
In 2D, with just one central channel, the whole thing just rolls on its central axis. So the effect is not terrible.
But the big problem comes when all of this (movement, roll, in and out) happens in 3D. It’s far more immersive - people are far more sensitive to moving 3D images - and the nausea effect is amplified. That is why (in my opinion) many handheld 3D systems have never really taken off. People think the nausea is because of the “glasses” - thinking back to the old active glasses.
Instead it’s the constant movement of the hand - it is this that induces the motion sickness.
Boom - so why do people love the 3D image of the da Vinci - “Because it doesn’t use glasses, and so you don’t get nausea!”.
NO!
It is because a robot arm is holding the scope - absolutely rock F’in solid. And the camera ONLY moves when the operator moves it - so there isn’t that constant unexpected movement and motion sickness. (And frustration, etc etc.)
It’s the STABLE 3D that is the big thing in robotics - not if you use passive glasses.
It’s clear - because as more robots come online with open 3D 4K screens and stable 3D scopes (held by the robot), people love the open console as much as the da Vinci 3D - and no one is getting motion sickness.
Advanced assisted laparoscopy can rely upon stable 3D imaging systems.
Stability is one of the biggest needs in advanced laparoscopy
One of the holy grails for laparoscopy for a decade has been to have a really great scope holder. This brings that stability I’m talking about.
But most people conflated multiple things and didn’t fully understand what they needed it for.
And people could never quite justify the benefit. But that was because most were looking at the wrong benefit: stabilising a 2D image and freeing up the assistant.
The massive benefit comes in that it enables a nausea free 3D image that will rival any robot. And has no unexpected movements that cause frustration.
To that end, in advanced assisted laparoscopy there will be a renaissance of scope holders - providing they can do a few things and do them well.
First - if they are just holding a 2D scope, they will bring some advantage by freeing up an assistant and stopping some of the motion. But honestly, the 3D vision in robotics has been hailed as one of the main reasons people buy a robot. Any laparoscopist would be missing a trick if they did not get a 4K 3D imaging chain - like Rubina - held by some form of stabilisation arm.
That is where the big “wow” will come from.
Secondly, the interface with the arm cannot be shitty. That means you can’t just bolt it firm with a Martin arm. You can’t have some stupid voice commands - you can’t have head trackers - you can’t have pedals or joysticks. They are all a pain in the ass for any surgeon trying to stay in the flow of surgery - and have repeatedly been shown not to work. It can’t be “stop - unbolt it - adjust it - re-bolt it.”
Instead the surgeon needs to be able to grab the scope - move it where it needs to go (for gross movement) let go and it stays rock steady.
Then, for the fine movements of dissection and suturing, the system needs to make small adjustments automatically to keep the action in the centre of the screen.
Step up to the plate - Moon Surgical’s Maestro.

It does just that. It uses fully backdrivable arms that the surgeon can move as if the arm were not attached to the scope. And when they let go - STABILITY!
This is an enabler for bringing robotic-quality imaging (stable, 3D, 4K, ICG) to manual lap. This combination of an advanced scope holder and cutting-edge 3D through a big, bright 4K OLED screen is what changes the game.

Add to this some smarts - like subtle instrument tip tracking, scene analysis and so on - and you have pretty advanced manual laparoscopic imaging. If that can then record the case - anonymise, log, edit and so on (step in Touch Surgery et al) - you start to build a digital ecosystem. We can talk about digital later.
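For the “keep the action in the centre of the screen” idea mentioned above, a naive version could look something like the sketch below. Everything here is hypothetical - the tip detector, the scope-nudging interface and the gains are placeholders for illustration, not Moon Surgical’s (or anyone else’s) actual implementation.

```python
# Illustrative sketch only: a naive proportional "keep the action centred" loop.
from typing import Optional, Tuple

IMAGE_W, IMAGE_H = 1920, 1080
DEAD_ZONE_PX = 80        # ignore small offsets so the image stays calm
GAIN = 0.002             # proportional gain: pixel error -> small pan/tilt commands

def detect_tip(frame) -> Optional[Tuple[float, float]]:
    """Placeholder: return (x, y) of the dominant instrument tip, or None."""
    ...

def nudge_scope(pan: float, tilt: float):
    """Placeholder: command a tiny scope-holder adjustment."""
    ...

def centring_step(frame):
    tip = detect_tip(frame)
    if tip is None:
        return                       # nothing tracked: hold rock steady
    dx = tip[0] - IMAGE_W / 2
    dy = tip[1] - IMAGE_H / 2
    if abs(dx) < DEAD_ZONE_PX and abs(dy) < DEAD_ZONE_PX:
        return                       # within the dead zone: do not chase the tip
    nudge_scope(pan=GAIN * dx, tilt=GAIN * dy)
```

The dead zone is the important design choice: without it, the scope would constantly chase every tiny tip movement and reintroduce exactly the motion sickness the holder is meant to remove.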
Surgical Assistance
So we have now already freed up one hand of the assistant. Let’s spend a second looking at what the assistant’s other hand is often doing during the case. Quite often the assistant is being a human retractor - most often in lower acuity and single quadrant cases. In others they are a little more active.
Acuity of cases will be important when we look at where this new form of manual laparoscopy sits in the spectrum of cases. If you have a mid to low acuity procedure in a low acuity setting - where it’s just difficult to justify a full robot - maybe advanced assisted laparoscopy can bring gains to the procedure.
In a lap chole, for instance, the fundus of the gallbladder is often grabbed by the surgeon, then the grasper handed over to the assistant - who stands there like a lemon holding the grasper while the surgeon gets to the business of Calot’s triangle. There are some minor adjustments - but in general it is a vast waste of human capital to have an assistant as a human retractor for 80% of a case. (Of course they do more.)
But here’s the biggie - in some sites of care / healthcare systems the surgeon will need to pay to have that assistant stood there to hold that retractor.
Many surgeons have tried clipping the retractor to the bed, or tried using a Martin arm. But often, subtle adjustments to tissue tension are needed.
But in other situations - there is no assistant available.
In surgical robotics, the surgeon can simply switch to the third instrument arm, move the tissue, and clutch back into the main operating arms. That feature reduces reliance on assistants.
It’s loved by surgeons.
So how do you do that efficiently in manual laparoscopy? Again, step in Moon Surgical with their second arm. That arm can quite easily be that “retractor.” And with the co-botic help of Maestro, the surgeon can easily move that instrument - which back-drives the arm - and when they let go… it stays rock solid.
See my podcast on MOON SURGICAL if you want to understand this deeper.
The key is that you now have smart assistance in laparoscopy - gaining some of the much-liked functionality of the robotic systems - but without the giant footprint of the robot, and while remaining bedside in the sterile field.
It is genuine laparoscopic assistance - scope holding and retraction - and it opens up the possibility of solo surgery in low acuity procedures, especially when staff shortages make it difficult to have a bedside assistant.
Haptics and force feedback
One of the major downsides of manual laparoscopy is the reversed movement through the pivot point. For some it is a bit complex and adds a degree of learning and difficulty. But that is countered by the fact that a large degree of force feedback and haptic touch remains, because the hand is directly on the instrument.
One of the big advantages of retaining assisted manual laparoscopy is that haptics are a “given.” But what we will start to see is that, in an assisted world, some of those forces can start to be measured. Moon for sure knows the tension the grasper is placing on the tissue - it’s built into their arms through force sensors.
In a system like Human Xtensions, there is a smart micro-robot between the handpiece and the end effector (more on that later), so you can even start to measure jaw closure forces and feed that back to the user as data.
So, not only can we retain haptics in advanced laparoscopy - but we can start to get data coming back as those instruments interact with tissue. It’s starting to feel a little robotic.
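As a toy example of what “forces as data” could look like, here is a small sketch that summarises raw grasp-force samples into simple feedback numbers. The sample values, field names and the 5 N threshold are all made up for illustration - they are not any vendor’s real specification.

```python
# Illustrative sketch only: turning raw grasp-force readings into simple feedback data.
from dataclasses import dataclass
import statistics

SOFT_TISSUE_WARN_N = 5.0   # assumed warning threshold in newtons (hypothetical)

@dataclass
class GraspSample:
    timestamp_s: float
    jaw_force_n: float

def summarise_grasp(samples: list[GraspSample]) -> dict:
    forces = [s.jaw_force_n for s in samples]
    return {
        "peak_force_n": max(forces),
        "mean_force_n": statistics.mean(forces),
        "over_threshold": sum(f > SOFT_TISSUE_WARN_N for f in forces),
    }

samples = [GraspSample(0.0, 2.1), GraspSample(0.5, 6.3), GraspSample(1.0, 4.8)]
print(summarise_grasp(samples))   # peak 6.3 N, mean 4.4 N, 1 sample over threshold
```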
Wristed instruments
Besides stable 3D vision, one of the other main reasons surgeons love robotic assisted surgery is the introduction of wristed instruments. Open surgical manoeuvres rely heavily on surgeon wrist motion, and in straight stick (the clue is in the name) manual laparoscopy, the wrists were lost. That means getting to the back of structures, working in narrow spaces, and above all suturing and knot tying are very hard with manual straight sticks. Some can do it and some can’t.
Step in manual “wristed” and “flexible” instruments.
Now, articulating instruments were tried in the ’90s with little success, because the motion was a clunky rotate-and-articulate affair. Since then, advanced instruments have moved on a lot.
There are mechanical flexing instruments like FlexDex, intended to make articulation and flexing more natural and simple, reducing stress and strain on the user.

You then have mechanically fully wristed instruments that mimic robotic wristing, such as Artisential, which means that as you move the handpiece you get directly translated wristing. Real pitch and yaw - just as you would get with a da Vinci.

Or there is Human Xtensions, which decouples the end effector from the handle via a small but smart computer drive unit. This also gives you fully wristed motion, but through a “fly by wire” system.

All have pros and cons and differing price points per case. But there is an ample choice of next generation wristed instruments that can advance laparoscopy.
All three give you different ways to get to that much loved wristed instrument that enhances suturing and complex dissection.
Smart instruments
If you now move to the higher end staplers, advanced energy devices and handheld devices such as Human Xtensions, you can also start to get rich data from these smart devices as they give assistance.
Smart staplers like Echelon by Ethicon or Signia by Medtronic have built-in firing mechanisms and tissue monitoring functions to help with tissue management. They also have powered articulation and firing sequences - all designed to assist in laparoscopy and reduce the stress and strain on the user.


Some of these features level the playing field between male and female surgeons - requiring a smaller hand grip and eliminating the high-force firing sequences that favour stronger hands.
As these instruments gain smart “wifi” connections, they will bring that data back to the user via suites of apps.
Adding smarts into the instruments makes them easier to use and easier to fire, and gives better, more uniform staple lines, self-governed energy delivery and so on. These are all things you will find in modern surgical robots - but here the benefits are delivered manually. These instruments start to bridge the gap between laparoscopy and robotic assisted surgery - creating an entirely new surgical segment.
But the technologies need to be used together to maximise impact.
Data / Digital
One of the cherries on the robotic cake is the rich data set it throws off - hand movements, forces, stapling, energy, vision and so on.
In manual laparoscopy, for decades, all of that rich data has been lost.
And that data, when linked to clinical outcomes, can give insights that help surgeons improve, healthcare systems become more efficient, and patients get better outcomes.
In advanced laparoscopy - using, say, Moon Surgical and Human Xtensions, and combining that with one of the many smart imaging systems that are emerging, such as Touch Surgery, or some of the integrated smarts coming from Olympus and Storz - you can now start to build up a meaningful data set from manual laparoscopic surgery. You can even get guidance and advice - insights and data to match to outcomes.

One issue today is that much of that data is disparate and the systems don’t talk to each other - so the companies developing apps for assisted and advanced laparoscopy must make them open, with simple APIs, so that all of that data can get rolled up. Some of the companies are already working on this now. And we are starting to see a rich data tapestry in laparoscopy - advanced assisted laparoscopy.
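To make the “open, simple APIs” point concrete, here is one possible shape for a shared event record that different devices could emit into the same case log. It is a sketch only - the field names and values are hypothetical, not an existing standard or any company’s actual API.

```python
# Illustrative sketch only: a minimal common event record that a scope holder,
# a smart stapler and an imaging app could all emit so one case rolls up into one data set.
from dataclasses import dataclass, asdict
import json

@dataclass
class SurgicalEvent:
    case_id: str            # anonymised case identifier
    device: str             # e.g. "scope_holder", "stapler", "energy"
    timestamp_s: float      # seconds from case start
    event_type: str         # e.g. "grasp", "fire", "clip", "camera_move"
    payload: dict           # device-specific details (forces, settings, etc.)

def to_json(event: SurgicalEvent) -> str:
    return json.dumps(asdict(event))

# Example: a stapler firing and a scope-holder move, both in the same format
print(to_json(SurgicalEvent("case-001", "stapler", 1843.2, "fire",
                            {"reload": "60mm", "clamp_time_s": 15})))
print(to_json(SurgicalEvent("case-001", "scope_holder", 1850.7, "camera_move",
                            {"dx_mm": 4.0, "dy_mm": -1.5})))
```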
With this - the procedures in manual lap can start to feel as rich as those in robotic surgery.
Single user control. (Smart ORs)
As we have recently seen with the release of the much anticipated da Vinci 5, the surgeon being able to autonomously control much of the OR’s function is key to unblocking efficiency and workflow.
What we are seeing with smart ORs - such as OR1 from Storz or Endo Alpha from Olympus - is that those smarts and that ability to control are also coming to the laparoscopic user. Presets, video controls, lighting, insufflation and energy are all now starting to be controlled by voice commands directly to the smart OR.
This will be essential if manual laparoscopy is to match the convenience of a surgeon at a smart robotic console.
Smart ORs have the potential to bring in preoperative imaging, understand workflow, and offer scene setting, video recording and analysis. And all of this can be connected through apps.
The smart OR will be one of the levellers for the advanced laparoscopist - and could also start to be the hub for all the data of open, lap and robotic cases (a much needed integration if we are to better understand the broader world of surgery).
Into this will also creep proctoring, guided assistance, smart assistance and advanced image analysis. The smart OR will be a central hub that acts like the smart tower of the robot. And if users embrace it - use it to its full ability - they will get rich help, insights and understanding from all the data, as well as improving efficiencies.
And all of this being built into the walls will reduce “footprint” which is essential as you move to smaller operating rooms.
Summary
Today there is standard and well practiced manual straight stick laparoscopy at one end of the spectrum.
At the other end is full mainframe robotics - Intuitive style, with systems like the new da Vinci 5.
But in the middle is a growing field of advanced assisted laparoscopy - where robotic like technology, benefits and digital products can start to mimic some of the core elements of RAS.
But by keeping the surgeon bedside, it allows that much-needed reduced footprint and improved workflow by negating the need for the full robotic set-up. It is ideal for shorter, lower acuity procedures where full-on RAS is perhaps overkill - or where there is no robot - in locations such as ASCs or private clinics.
It is the zone in the middle, between manual lap and RAS, where smart assisted laparoscopy could find its place. A place where millions of procedures could benefit from smarter workflows, smarter laparoscopy, removal of scarce and expensive assistance - and much more.
As we move forwards and people start to stack these technologies together, there will be a place for everything from standard manual lap, through advanced assisted laparoscopy, all the way up to full robotic assisted surgery. Each will have its place in the spectrum of patient care.
These are opinions of the author for educational purposes only.