How AI tools could make physician ownership more viable


As independent physician practices face mounting financial and administrative pressures, AI-driven solutions are emerging as a powerful tool to level the playing field. 

Jeremy Shiner, founder and CEO of Myriad Systems, a technology and software company that provides AI-driven practice management solutions, joined Becker’s to discuss how AI is making independent practice more viable than ever.

Question: How are AI-driven solutions, like the ones your company is developing, helping make physician ownership more viable?

Jeremy Shiner: For me, it comes down to two major areas. The first is revenue cycle management, which is arguably the most important, and the second is clinical documentation and administrative work.

I’ll start with revenue cycle management. Given the paradigm we just laid out — with costs going up and payments going down — if we can leverage not just AI but also automation and psychology (which we’ve been doing for some time), we can drive more payments at the time of treatment.

I learned a lot of these techniques from the early days of payment processing in healthcare practices from my father’s company, Rectangle Health. If we can implement strategies like real-time insurance card scanning, eligibility checks and algorithms that accurately estimate patient portions — ensuring transparency and giving patients options — it empowers them and makes them feel in control. By using live animations and illustrations to explain benefits in a digestible way, patients are more likely to pay at the time of treatment.
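A patient-portion estimate of the kind described above can be sketched in a few lines. This is a minimal illustration, assuming an eligibility response that exposes the plan's remaining deductible, coinsurance rate, and copay; the function name and logic are hypothetical, not Myriad Systems' actual algorithm.

```python
# Hypothetical patient-portion estimate from eligibility data.
# Assumes the payer's allowed amount and benefit fields are known;
# real adjudication involves many more plan rules.

def estimate_patient_portion(allowed_amount: float,
                             deductible_remaining: float,
                             coinsurance_rate: float,
                             copay: float) -> float:
    """Estimate what the patient owes at the time of treatment."""
    # The deductible applies first: the patient pays up to what remains.
    deductible_part = min(allowed_amount, deductible_remaining)
    # Coinsurance applies to the balance the plan covers after the deductible.
    covered_balance = allowed_amount - deductible_part
    coinsurance_part = covered_balance * coinsurance_rate
    return round(deductible_part + coinsurance_part + copay, 2)

# Example: $200 allowed, $50 deductible remaining, 20% coinsurance, $25 copay
print(estimate_patient_portion(200.0, 50.0, 0.20, 25.0))  # 105.0
```

Showing the patient this breakdown (deductible, coinsurance, copay) rather than a single opaque number is what supports the transparency the interview emphasizes.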

If they can’t pay their estimated portion up front, we can use custom Visa/Mastercard-approved pre-authorization to store their card in a compliant way for future insurance balances. This ensures transparency and communication, so instead of receiving a bill in the mail and ignoring it or getting sent to collections (which, as we discussed, is harder than ever), patients either pay today, guarantee payment, or indicate that they can’t do either — allowing the office to address the situation proactively.

Q: I would imagine this is something hospitals might not be nimble enough to offer?

JS: Yeah, what I've seen from a lot of hospitals in this regard doesn't surprise me. Some of these previously physician-led practices — some of them run by my own doctors — have been taken over by big hospitals in Westchester County. These hospitals have some of the most archaic payment systems.

Many of them use Epic for their EHR, which is an incredible system, but they still have a patchwork of different systems for various functions. That’s often because they’ve inherited some systems from the doctors when they acquired the practice while still maintaining their own legacy systems. Even they struggle with integration.

So I think we’ll see an evolution on both sides — within private practices and in hospital technology — where sophistication and automation are starting to be implemented. Surprisingly, despite their vast resources, hospitals don’t always have these systems in place.

Q: How does this AI make independent practice more appealing compared to hospital employment? 

JS: Many providers report that anywhere from 20% to 50% of their time is spent on clinical documentation, depending on their specialty. Of course, there’s variance between specialties, but looking at general self-reported measures, this is a huge issue. With MIPS, meaningful use requirements, malpractice lawsuits and the scrutiny placed on clinical notes, documentation needs to be robust and accurate. At the same time, it takes a tremendous amount of effort, and insurance reimbursement rates simply don’t justify that level of time investment.

From a business standpoint, this is a major factor pushing providers toward management companies or hospital systems that can offload that burden — or at least afford to hire scribes for them.

On the documentation side, our system offers generative charting, where providers use templates, and the system writes the entire note based on their inputs. There’s also chat-style charting, similar to what you might have seen with ChatGPT — but in a closed-circuit environment. Providers can type in information, add patient forms, past notes, history, macros, and codes, and the system generates a complete narrative, automatically mapping it to the appropriate sections, such as a SOAP note. It can even suggest codes. What used to take an hour and a half per encounter — depending on the provider and their charting style — can now take just five minutes.
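The section-mapping step described above can be illustrated with a deliberately simple keyword router. This is a sketch only: the keyword lists and default bucket are assumptions for illustration, and a production system would use a trained model rather than string matching.

```python
# Illustrative sketch: routing free-form charting statements to SOAP
# sections. Keywords and defaults are assumptions, not the actual system.

SECTION_KEYWORDS = {
    "Subjective": ("complains", "reports", "history", "allergy"),
    "Objective": ("exam", "vitals", "bp", "temp"),
    "Assessment": ("diagnosis", "impression", "likely"),
    "Plan": ("prescribe", "follow up", "refer", "order"),
}

def map_to_soap(statements):
    """Route each charting statement to the best-matching SOAP section."""
    note = {section: [] for section in SECTION_KEYWORDS}
    for s in statements:
        lowered = s.lower()
        for section, keywords in SECTION_KEYWORDS.items():
            if any(k in lowered for k in keywords):
                note[section].append(s)
                break
        else:
            note["Subjective"].append(s)  # default bucket for unmatched text
    return note

note = map_to_soap([
    "Patient reports intermittent headache for two weeks",
    "Exam: no focal neurological deficits",
    "Impression: tension-type headache",
    "Plan: follow up in four weeks",
])
```

The point of the sketch is the shape of the problem — unstructured provider input in, structured note sections out — which is what makes an hour-and-a-half chart compressible to minutes.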

Beyond that, there’s ambient listening technology. We’re not the only ones using this, but essentially, a microphone in the office transcribes the entire appointment in a HIPAA-compliant manner, turns it into a narrative format, and maps it to the provider’s chosen note structure — whether that’s H&P, SOAP, or another preferred format.

What’s unique about our system is that it’s fully closed-circuit. A lot of AI platforms popping up now generate notes that providers have to copy and paste into their EHRs, but our system keeps everything internal. We don’t pass data to a third-party LLM — we host a medically trained model on our own servers and continuously train it. This ensures constant improvement and customization based on provider preferences.

It also understands patient demographics and past health information from intake forms. If a patient fills out their intake on an iPad in the office or remotely on their phone, our system pulls that data and turns it into real-time narratives that the provider can integrate into the note.

Since 30% to 50% of a note is often subjective — such as patient history, allergies, medications, and chief complaints — our system can automatically generate that portion. The provider then enters their objective findings, presses “Generate,” and the full note is completed.
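Generating that subjective portion from structured intake data is mostly an assembly problem. Below is a hedged sketch under assumed field names (`chief_complaint`, `history`, and so on); the real intake schema would differ.

```python
# Hedged sketch: drafting a subjective narrative from structured
# intake-form fields. Field names are hypothetical.

def subjective_from_intake(intake: dict) -> str:
    """Turn intake-form fields into a draft subjective narrative."""
    parts = [
        f"Chief complaint: {intake['chief_complaint']}.",
        f"History: {intake['history']}.",
        f"Medications: {', '.join(intake['medications']) or 'none reported'}.",
        f"Allergies: {', '.join(intake['allergies']) or 'none reported'}.",
    ]
    return " ".join(parts)

draft = subjective_from_intake({
    "chief_complaint": "lower back pain for three days",
    "history": "no prior back injuries",
    "medications": ["ibuprofen"],
    "allergies": [],
})
```

The provider would then review this draft, add objective findings, and generate the complete note.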

This is a game changer — allowing providers to document efficiently in a fully HIPAA-compliant way without passing patient data to third parties. We even have two patents pending on how we use data and our algorithm-driven prompt system. Instead of relying on broad internet-trained AI models that risk hallucinations, our system operates solely within a controlled dataset.

For example, in mental health, some AI systems might infer dangerous assumptions — like turning a note that mentions “patient shows signs of depression” into “patient has suicidal ideation.” That’s a serious risk. Our model is designed to avoid those pitfalls by contextualizing data properly, ensuring accuracy while still reducing administrative burden.
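One simple way to guard against that kind of escalation is a grounding check: flag any high-risk phrase in the generated note that has no support in the source encounter. The phrase list and exact-match logic below are simplified assumptions for illustration, not the patented prompt system described above.

```python
# Illustrative grounding check: surface high-risk claims in a generated
# note that do not appear in the source text. Phrase list is an assumption.

HIGH_RISK_PHRASES = ("suicidal ideation", "self-harm", "homicidal")

def unsupported_risk_claims(source_text: str, generated_note: str):
    """Return risky phrases present in the note but absent from the source."""
    src = source_text.lower()
    return [p for p in HIGH_RISK_PHRASES
            if p in generated_note.lower() and p not in src]

flags = unsupported_risk_claims(
    "Patient shows signs of depression.",
    "Patient has suicidal ideation and signs of depression.",
)
# flags == ["suicidal ideation"] — exactly the escalation described above
```

A flagged phrase would be held for provider review rather than written into the chart, which preserves the time savings while keeping a human in the loop on safety-critical language.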
