What's next in soft tissue surgical robotics - Part 1: Main Frames (Exclusive Content)
- Steve Bell

- Jan 9, 2024
- 17 min read
Updated: Jul 30
It's an exciting time in surgical robotics. As the former CCO of CMR Surgical, I know this market very well. It was my job... Read this post for my predictions of the exciting changes coming in the next 24 months, including daVinci Gen 5. Get a subscription to read all exclusive content on surgical robotics, AI, new robotic systems, instruments and vision systems for 12 months @ just 19.99.
Currently there are a few formats of surgical robot on the market: the large "mainframes" such as Intuitive's daVinci, Medtronic's modular Hugo, or new entrants like Medicaroid; "lighter", low-acuity systems such as Distalmotion or Moon Surgical's Maestro; and, scaling all the way down, mini-format robots such as Virtual Incision and Vicarious Surgical. Today I'm going to focus on the mainframes and what I think is potentially coming to them in the next two years - changes to take them to the next level. I have no inside information; these are just speculations based on seeing the direction of travel and understanding the demands of the market.

What you see is what you get... Vision
One of the key limits in all surgery today is the rather limited capability of the human eye. We see a fairly narrow spectrum of light. Make that 2D, with a narrow field of view and low resolution, and you further impair the surgeon. To make the best optically based decisions (critical for surgery) there needs to be 3D - and, even better, stable 3D (which the robots give all day long).
But honestly, today's robotic camera systems, scopes and screens are all a little backwards compared to the current state of the art in 2D laparoscopic imaging from companies such as Storz or Arthrex.
The issue today is that most robots are using older, outdated HD systems. Couple that with older 3D HD open screens and you can have a fairly low-resolution image reaching each eye (sometimes through glasses that darken the image). I imagine that as 3D systems start to catch up with the state-of-the-art 2D systems, we will see an improvement in vision at a baseline level. Putting two large-format chips into the cameras (you need two for 3D), and then equipping those cameras with further chipsets (often up to four large chips), is a problem of bulk and other technical issues - or it has been to date. I think we will see these problems resolved in the next 12 to 24 months, and robots will start to get 4K, four-chip cameras in spectacular 3D. This will be combined with 4K (I mean genuine 4K) screens and 4K-ready scopes (controversial, but a bad scope with bad light and small-diameter lenses will downgrade the image). So as the entire vision chain comes up to true 4K, I expect to see some 4K 3D vision chains burst onto the robotics market this year and next. This will be a huge upgrade, as it simply brings greater clarity, tissue definition, colour rendition, contrast and illumination. It brings the vision system up to what the human eye expects to see.
Beyond the visible spectrum, some of the upgrades we will start to see could be described as "beyond the human eye". Currently the fluorescence systems (near-infrared) are focused on exciting and illuminating ICG (indocyanine green) - actually an old molecule, but when injected and then excited, the right camera can pick it up and process that image as an overlay. ICG can help confirm that blood flow is still reaching tissue edges, biliary tree anatomy can be "lit up", as can lymph nodes.

However, on robots today such as daVinci, the system is still the older-style "Firefly" - an older Novadaq-style technology using black-and-white images with a green overlay. It looks rather like night vision. I predict that the more current and advanced colour-overlay ICG systems (available in 2D today, such as Rubina) will come onto various robots in the next 12 to 24 months.
Update 24th Jan - I suspect daVinci Gen 5 will have full colour overlay ICG in the next launch
This will allow multiple vision modes with colour or black-and-white anatomy, and varying colour overlays - colour maps of excited agents like ICG.
I also see a lot of research going on into different markers that can be illuminated this way - for nerves, or prostate tumour cells, etc. I know Intuitive is already leading with some markers here. So I imagine there will not only be an upgrade in the vision systems for fluorescence, but also a much wider range of markers available for different clinical indications.

All good - BUT. You still need to inject the ICG or other markers. They have certain half-lives, and they can also leach out of target tissues and become, with time, a bit of a fuzzy mess. Instead, one of the goals in imaging is to use either laser light or different spectrums of light to see and amplify subtle differences in tissue colour that the human eye cannot. Activ Surgical has one such system - ActivSight - which can show real-time blood flow in tissue (as an example) with no need for dyes.
Also coming (and yet to be clinically proven) is a range of hyperspectral imaging modalities, where different chips with different excitatory lights will be able to illuminate different structures and amplify those differences at different wavelengths, without the need for injected dyes.
So in one mode you could excite all the nerves so they stand out; in another you could see all the blood vessels better; in another... tumour margins, and so on. There are some technical challenges with depth of penetration of the light into tissues - but these vision systems are coming. I think it will be at the later end of the 24 months, but advanced imaging will gain traction.
Finally, I predict the work that companies like Asensus and Moon Surgical are doing on advanced image recognition within the surgical image will start to be refined. Companies like Medtronic already have their AI-based GI imaging, and it won't take much to port that across to, say, Hugo. These advanced image-recognition systems (especially if combined with advanced viewing modes) should open up a host of advantages for the robot. Why the robot? The stable image. Having the robot hold the 3D camera helps when systems like NVIDIA's are processing images: the fewer pixels changing per second, the better the image-processing systems work (from what I've seen). Even small hand movements by an assistant holding the scope will cause whole frames to change, and that requires more GPU power - and could reduce accuracy or increase lag. A rock-solid image held by a robot should theoretically just be better.
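To make the stable-image point concrete, here is a minimal sketch - not any vendor's pipeline; the frame sizes, threshold and synthetic scenes are invented for illustration - that counts the fraction of pixels that change between two consecutive frames. A robot-held view where only an instrument tip moves touches far fewer pixels than a hand-held view where even one pixel of camera jitter shifts the whole scene:

```python
import numpy as np

def changed_pixel_fraction(prev, curr, threshold=10):
    """Fraction of pixels whose intensity changed by more than `threshold`."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return float(np.mean(diff > threshold))

# Robot-held view: the scene is static, only a 20x20 instrument tip moves.
stable_prev = np.full((480, 640), 100, dtype=np.uint8)
stable_curr = stable_prev.copy()
stable_curr[200:220, 300:320] = 180

# Hand-held view: one pixel of camera jitter shifts a textured scene sideways.
column = (np.arange(640) * 50 % 256).astype(np.uint8)
scene = np.tile(column, (480, 1))
jitter_prev = scene
jitter_curr = np.roll(scene, 1, axis=1)

print(changed_pixel_fraction(stable_prev, stable_curr))  # ~0.0013
print(changed_pixel_fraction(jitter_prev, jitter_curr))  # 1.0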
Once we have those images with combined image information, the robots will start to be able to do some magic with their embedded software that relies upon that data (more later). But I do predict that more intelligent image processing is coming to the robots (it's here in a few now) and that it will improve quite rapidly over the coming 12 to 24 months. Other robots will start to introduce it into their image offering as we move forwards.
Cockpit changes in surgical robotics
The overall footprint of the system (not just the boom for the arms) is critical in the operating room. And with size often comes mass. Incredibly, some of the systems have a combined component weight of over 1.8 metric tons - and some of them have that weight concentrated in a relatively small area of the floor.
That means that floors must be specifically rated for many systems, and in extreme cases even reinforced. Storage of the system can be a challenge (think small ASCs), and the bigger the bulk, the less incentivised teams are to move systems from OR to OR (even if they can - some floors may not be rated).
One of the bigger and heavier components of all systems - not just the big mainframes - is the cockpit or console: the place where the surgeon sits to teleoperate the system. In a daVinci a lot of the bulk comes from the two screens embedded in the console; in others it can be mechanical systems like the lift mechanisms. With all that weight up top, the stability of the base needs to be increased to keep the whole console stable.

Having seen some recent and newly designed systems, it's clear to me that system consoles will continue to get a little lower, smaller and lighter. A lower seating position should allow a lower centre of gravity, so less bulk, weight and size is needed in the base of the console. Combine that with much thinner and lighter 3D viewing systems (passive screens, active screens, dual monitors) - a byproduct of the rapidly evolving consumer advances in screen technology - and we should see smaller and lighter consoles, or even consoles that are completely collapsible for storage and movement.
Some companies may boldly try to move away from the screen altogether - but I'd urge caution there: headsets worn for 5 hours are not for everyone, for many reasons, from comfort to eyestrain to sweat in the headset. Any VR-style headset would need see-through capability (so, mixed reality), as it is becoming clearer that the preference is for a vision system that "keeps you in the room" with the team.
I also predict that the switchgear used today for foot pedals and hand switches will get lighter yet more robust - shaving important grammes-to-kilos off console weights. Haptic arms will get lighter and smaller as haptic motors get better (even with gravity compensation and haptic force feedback). Add all these refinements - lighter PCBs (printed circuit boards), cooling systems needing less heavy fans as chips become more efficient, and so on - and it should all converge so that smaller, more "compact-able" (is that even a word?) consoles emerge in the coming 12 to 24 months. I do predict the daVinci will be the first to have a major redesign, in Gen 4.5 or 5.
Update Jan 24th - the next daVinci will be Gen 5 and will have a significantly upgraded and less bulky console, thanks to a major change in their screen technology.
But consoles will not only get smaller and lighter - they will be a little bit better. Touchscreens able to do a host of actions - from system setup to OR management to bringing in CT images - will improve. They are already here, but there will be a massive upgrade in what those additional screens (or main screens) are capable of, and they will bring a lot of benefits to the surgeon in the cockpit. The systems will also be paired with the main robot and the OR team - giving information to the surgeon, allowing the team to supply information back to the surgeon, and pumping it all up to the cloud for peri- and post-surgery insights. It's happening today, but this will be an area of functionality improvement. Smart companies will maybe keep a firewall between what is "medical" software and what is not. You can see this already in many of the digital products people are bringing to market.
And all of this will be capped off with tweaks to ergonomics and comfort. Some may even get rid of the haptic linkage arms and have Oculus-style controllers - but it is debatable whether, in a 5-hour operation, you want to take the weight of the controllers or (as happens today with gravity compensation) let the robot take the load. I heard Asensus is bringing them out on Luna... so the market will let us know quickly.
Haptics
I strongly predict that we will see haptics evolve in these systems. I say evolve because companies like Asensus already have haptics (of a sort) in their Senhance system. But I predict the main player, Intuitive, will bring out the first meaningful and useful haptic system.
Haptics is a very complex science and generally misunderstood - I won't go too deep into haptics today, but will do another post just on this subject.
Needless to say, when a surgeon moves from open surgery to laparoscopic straight-stick, they lose some haptic senses: "Is the tissue wet to the touch?" "Is it warm?" But with laparoscopy they still retain (via a direct mechanical linkage from the user to the lap instrument) some haptic senses - force on the tissue (albeit through a pivot point), how hard or soft tissue is, how it feels when you dissect it.
In robotics those physical hand-haptic senses are lost, as there is no direct mechanical linkage between the instrument (end effector) and the surgeon's hand - there is a robot in the middle. It is true that surgeons learn to compensate (like video game players) and develop what is often called visual haptics, where just by looking at the screen they can "feel" what is going on inside... they can "understand" what tissue is like by watching how a dissector prods it. But the skill level needed to get great at that varies between users: some get it fast, some never get it. Many surgeons have lots of open and lap experience to draw muscle memory from, but a lot of new users won't have that to fall back on; so learning curves may be longer and errors may occur as they develop these visual-haptics skills.

Instead, a theoretical holy grail of robotics is to give that sense of touch back to the surgeon. Many systems today have sensors in the robot arms that can detect forces, stress and even the closing resistance of end effectors. Many also have back-driven haptic arms with motors in them. So a few robots already have the inherent hardware in the system to apply haptics. But so far only Asensus has dared to do it. Why?
Part of the answer is that bad haptics is worse than no haptics. You can implement haptics in a thousand ways, and it can give a user a good, connected experience - or a false one. Let me give a risk scenario: the software written to enable haptics has some kind of multiplier in the algorithm. You think you are applying force X to the tissue, but the scaling makes you actually apply 10X the force - and you rip the tissue.
Or, in another example, there is a terrible lag between what you feel and what you see. You see that you are pulling on tissue, but you feel it a few milliseconds later, so there is always a disconnect between the hand and the eye. That's bad haptics.
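A toy sketch of why that multiplier matters - this is not any vendor's control loop; the function name, scale factor and force limits are all invented for illustration. It renders sensed tissue force back to the hand controller through a scale factor, with a hard safety clamp so a mis-set multiplier can never render unbounded force:

```python
def rendered_force(sensed_force_n, scale=1.0, max_force_n=5.0):
    """Map force sensed at the end effector (newtons) to the force
    rendered at the surgeon's hand controller."""
    force = sensed_force_n * scale
    # Safety clamp: even a buggy 10x scale can never render more
    # than max_force_n at the surgeon's hand.
    return min(force, max_force_n)

print(rendered_force(2.0))              # correct 1:1 scaling -> 2.0
print(rendered_force(2.0, scale=10.0))  # buggy 10x scale, clamped -> 5.0
```

The same clamp-and-scale reasoning applies in the other direction (hand to tissue), which is the rip scenario above: without a bound, a scaling bug turns a gentle pull into a tissue tear.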
I predict that Intuitive - which has been working on this for over a decade - has to some degree cracked it. For certain instruments they may well now be able to give good and meaningful haptic feedback to the surgeon. Does this warrant a "generation 5" title for their system? Maybe. But I predict in 12 to 24 months there will be a daVinci released with good haptics. I imagine that would allow a premium on these end effectors, and maybe even the console. What would make sense is a console upgrade programme to a lighter, tighter console with haptics, plus a tiered instrument strategy: low-functionality instruments at 30 lives, mid-range instruments (the current instruments) that are higher in functionality, and then a premium range of high-functionality haptic instruments (maybe even with tip sensors?). This will not be for everyone and all cases, but I also predict constant uptake, and that it will become the gold standard in robotics within five years.
Advanced instrumentation
Talking of instrumentation: there will be a raft of companies coming out with staplers, multifunctional instruments and advanced energy of various flavours. It is clear that Medtronic must get their stapler - and their LigaSure - onto the robot. This will bring many challenges, but if they wish to stay in the robotic "arms race" (pun intended) they will need a fully capable robot. They have the staplers and the energy devices in the cupboard, so I predict that within 12 to 24 months max we will see the first iteration of their advanced instrument suites in limited launch - on Hugo.
Other companies such as Asensus and Medicaroid - to name a few - will either have to develop their own staplers and advanced energy, or move to partnerships. The proprietary approach to robots already means that Medtronic and JNJ (when they bring Ottava) will create a further hurdle for Chinese stapling companies by not allowing anyone else's stapler to work on their systems. So you will have robot companies out there with no staplers - and stapler companies with no robot. To me it is clear that several of the robot companies will need to team up with a Panther or Lexington or another stapler manufacturer to get a stapler on their robot. I predict a few partnerships might be formed for staplers and advanced energy-delivery devices (of various flavours) - but I'm not sure any of them would necessarily beat Medtronic to market this year.

To defend its leading position, I also think Intuitive will (as they already are) up their game in advanced instruments: expanding their SureForm stapler range, improving software control as data floods back from their fleet of installed robots (one big advantage they have), plus they could expand into circular staplers as well. I have often wondered whether they would also come out with a hand-held range that plugs into their "computer". Maybe fantasy - but if someone likes their SureForm but wants to use it off-robot, why not a small hand-held drive unit that plugs into their tower and munches through their reloads? Add that to their hand-held camera (which they have) and now Intuitive starts to creep into the lap market and defend (in an offensive way) against Medtronic and JNJ offering all their options. The same could go for the energy devices - why not a hand-held version of SynchroSeal for lap or open?
On their advanced energy devices, I predict they will not stand still. SynchroSeal is a great idea, but the personal feedback I saw in the OR was not as great as for their Vessel Sealer Extend - which I saw find even better favour with their new energy generator, which seems to have given it a good performance bump. The issue with SynchroSeal that I saw was "hit and miss" tissue division. Vessel Sealer Extend uses a cold blade, but that makes it a longer instrument past the wrist. SynchroSeal uses a hot electrosurgery "blade", so it can be shorter - but tissue division is never as clean with a hot blade. That was seen in tripolar systems back in the late 90s.
I predict they will improve SynchroSeal and get the cutting faster and cleaner, either by a slight redesign or by tweaking the energy algorithms. I imagine they might also come out with some different lengths and potentially different tip profiles for different clinical applications.
Update Jan 24th - I am predicting more strongly that a select range of full-haptic instruments will launch with the Gen 5 daVinci.
Help with workflow - setup & teardown
If you look at one of the critical needs for hospitals as we move forward with healthcare, it's efficiency. And as you move to lower-acuity procedures in less critical care settings - ASCs, OPDs, private clinics - that becomes more critical still. Throughput on short cases like a 40-minute lap chole, or a 1-hour inguinal lap hernia, does not allow extra time for robotic setup and teardown. Nor does it allow robots to sit idle in an OR "because that case doesn't need it". The robot needs to be a hard-worked asset in the financial reality of an ASC - so mobility, size, and ease of setup and teardown need to be improved in all systems.
This is a complex subject, as it is not just about positioning and draping the robot. Let's say you want to turn over 2 hernias, 2 lap choles, 2 small gyn cases, a hysterectomy and a reflux procedure in a day. That would be 2 specialities sharing the robot - maybe an AM robotic gen-surg session and a PM robotic gyn session. Turnover time between cases could be as short as half an hour to achieve that workload. If the general surgeon needs to sterilise 4 cameras and they all go off-site, and the gyn team needs 4 cameras (that need to be sterilised), then you need 8 robotic cameras - plus a few scopes per camera (0° and 30°) and all the robotic instruments for the cases, sterile and packed. Just having enough sterile equipment on the shelf and ready could become a limiting factor - especially if the ASC or clinic has to ship products to a central sterilisation service.
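A back-of-the-envelope way to see how sterilisation turnaround drives shelf inventory - a toy model with invented numbers, not any site's real logistics: if cases start every cycle, and a used camera is out of action for the whole sterilisation window, you need enough cameras to cover every case that starts inside that window.

```python
import math

def cameras_needed(avg_case_min, turnover_min, sterilise_min):
    """Minimum cameras so one is always ready when the next case starts.
    Toy model: back-to-back cases; a used camera is unavailable for
    `sterilise_min` minutes (cleaning + transport + sterilisation)."""
    cycle_min = avg_case_min + turnover_min   # time between case starts
    return math.ceil(sterilise_min / cycle_min) + 1

# Hypothetical day of ~60-minute cases with 30-minute turnover:
print(cameras_needed(60, 30, sterilise_min=240))  # off-site service -> 4
print(cameras_needed(60, 30, sterilise_min=60))   # rapid in-house steam -> 2
```

Multiply that by two specialities, a couple of scopes per camera and the instrument trays, and the shelf-stock problem described above appears very quickly.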
Companies will have to start to think very differently when they move to a super-high-intensity, low-acuity caseload setting. I've personally seen it in India. Scopes need to be available and not sitting in sterilisers; instruments need to be steam-sterilised in quick-turnaround environments, not shipped to sterilisation departments an hour away with specialist equipment. Companies may need to offer lower-functionality but disposable products just to make the logistics work - single-use, lower-quality, lower-cost instruments that can make efficiency work in certain settings? Medtronic's "use Storz straight-stick where you can" - maybe that's what they are thinking?
What I predict is that either the big mainframes will need a variant for the low-acuity settings, with specific tools for that workflow, or systems like Moon and Distalmotion will creep in and "less capable but good enough" robots will start to take that big caseload. (More on those systems in another post.) I'm not sure what will emerge as actual product in the next 12 to 24 months, but I do see a focus on new sites for robots, with lower-acuity procedures being targeted.
Embedded software will evolve
This type of software is what is inside the robot to make it work. I think it's going to get smarter and better, and there will be lots of iterations of the software to improve system capabilities and performance. I'll pick just a few functions where I see a need for improved software in the systems. The first is collision avoidance. Arms clashing outside the patient is a real problem, especially as more complex multi-quadrant surgery is performed. The software in the arms needs to allow the tip of the instrument to remain where the surgeon wants it while the arms dance in a coordinated way outside, giving maximal room between them to avoid clashes. This can be done in the system software, and if it is, it will ultimately allow trocars to be placed closer together and movements around quadrants without clashing. This would be a massive gain for surgeons and the bedside team, and would reduce a lot of "downtime" spent sorting out clashes during cases. As more and more data floods back to the head offices of all the companies, their software engineers will be able to analyse it and start to improve the embedded software, all based on real-world data. (But you need cases to get that mass of data, and Intuitive has it by the boatload.)
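The "tip stays put while the arms dance" behaviour is, in robotics terms, redundancy resolution in the null space of the arm's Jacobian. Here is a minimal sketch - a planar 3-link toy arm with invented link lengths and an arbitrary secondary objective, nothing like a real surgical-arm controller: joint motion is projected into the Jacobian's null space, so the joints rearrange while the tip position is held.

```python
import numpy as np

L = np.array([1.0, 0.8, 0.5])  # link lengths (arbitrary toy values)

def tip(theta):
    """Tip position of a planar 3-link arm with joint angles theta."""
    a = np.cumsum(theta)  # absolute angle of each link
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jacobian(theta):
    """2x3 Jacobian of the tip position w.r.t. the joint angles."""
    a = np.cumsum(theta)
    J = np.zeros((2, 3))
    for i in range(3):  # joint i moves links i..2
        J[0, i] = -np.sum(L[i:] * np.sin(a[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(a[i:]))
    return J

theta = np.array([0.3, 0.6, -0.4])
target = tip(theta)
g = np.array([1.0, -1.0, 0.5])  # secondary objective, e.g. "open the elbows"

for _ in range(200):
    J = jacobian(theta)
    J_pinv = np.linalg.pinv(J)
    null_proj = np.eye(3) - J_pinv @ J           # null-space projector
    correction = J_pinv @ (target - tip(theta))  # pin the tip in place
    theta = theta + correction + 0.01 * (null_proj @ g)

print(np.linalg.norm(tip(theta) - target))  # tip drift stays tiny
```

The joints end up in a noticeably different posture, yet the tip has barely moved - the same principle, scaled up, that would let a mainframe rearrange its arms around clashes without disturbing the instrument tips.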
I also think the software will start to do more intervention and provide more "assistance" for the surgeons. As we get vision-system improvements, we can feed that data into the robot and say "don't go here with a scissor" - the mythical no-fly zones. I think we are close to this - image analysis and tissue-boundary detection are getting close, and perhaps Asensus will be the first to implement it? I predict some form of exclusion zones or no-fly zones will come to commercial systems within 24 months.
Update 24th Jan - I have a growing sense that the Gen 5 daVinci will have much more sophisticated clash-management software.
App software
As I've said many times, having a connected data suite for the users, the hospital and the companies to "understand" everything - especially clinical performance, by tying into patient records - will be table stakes. If you cannot supply a free set of tools that give data back, you will be an outlier in the robotics field. Today a lot of information is already given back in great apps by several companies. The next 24 months will see refinement of the data sets and of the dashboards that present them back to all the stakeholders. But with the explosion of ChatGPT and "AI", I think what we will see are "insights". Data analysis will start to pick up robotic trends that will give huge insight to the entire user base (in days, not years). The apps will start to push out valuable insights gained from the hive of data - digesting it, finding trends, and pushing tailored help plans out to individuals: "Do more simulation on suturing...", "Use the needle driver in your left hand for those passes", and you'll get better results. Over the next 24 months the apps will start to be advisors, not just information aggregators and data displays. And this will lead to better outcomes, which will convert into cost savings - a key metric for robots.
Summary
These are just my key personal insights and projections, and today it was a dive into the big "mainframe" robots. Keep coming back for more, deeper insights into MedTech as I continue my series of blog posts on the future of surgical robotics.
Check out the deep dive into Robotic Instruments to get more insight into surgical robotics and upcoming advances






If you have any particular subject you want me to deep dive into - as a subscriber you get the opportunity to give me suggestions. Please do.