Session Recap - Gen AI Isn't Magic: A Realistic Look at AI in Long-Term Care

May 9, 2025
6 min read

Generative AI is everywhere — from viral headlines to embedded features in your favorite apps. But beyond the buzz, long-term care (LTC) organizations face a practical question: what can AI actually do for us, today?

At a panel session at the Ohio Health Care Association's annual conference, Aryeh Hoffman and Shalom Reinman from Megadata, along with Steven Feld from Propela Tech, tackled this question head-on. Their message was clear: AI isn’t a silver bullet — but when applied thoughtfully, it can drive real, measurable value.

This blog post captures the essence of that session, giving long-term care leaders a grounded perspective on how to navigate AI adoption and data management with clarity and confidence.

What Generative AI Can (and Can’t) Do

Generative AI, at its core, is a text prediction machine. It works by analyzing patterns in massive amounts of data to predict what words — or actions — should come next. That simple mechanism powers some remarkably complex tasks.
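
To make that mechanism concrete, here is a deliberately simplified Python sketch: a toy next-word predictor built from raw counts. Production models use large neural networks rather than frequency tables, but the underlying task is the same: predict what comes next from patterns in the data.

```python
from collections import Counter, defaultdict

# Toy illustration only: real models use neural networks, not raw counts,
# but the core task is identical: predict the next token from context.
corpus = "the resident was assessed the resident was stable the resident fell".split()

# Count which word follows each word in the sample text
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str | None:
    """Return the most frequent next word given one word of context."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("resident"))  # "was" (seen twice, vs. "fell" once)
```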

Where Gen AI excels:

  • Text processing and summarization
  • Understanding and rephrasing complex documents
  • Extracting insights from unstructured data
  • Acting as a dynamic knowledge assistant

Aryeh Hoffman, VP Technology at Megadata, shared how he uses tools like ChatGPT to quickly digest dense CMS rulebooks or PBJ reporting guidelines — documents that used to take weeks to analyze:

"What was taking us a couple of weeks or months… we got the results back in a couple of hours."

But AI isn’t flawless. Its limitations include:

  • Weak logical reasoning and math skills
  • A tendency to hallucinate (confidently generating wrong answers)
  • Overreliance on training data that may not match your specific use case
"Just because it sounds right, doesn’t mean it’s right," Aryeh cautioned.

The takeaway? Use Gen AI as an assistant, not an oracle.

Practical Applications in Long-Term Care

Steven Feld, a tech-forward LTC operator turned Co-founder & Chief Solutions Officer of Propela Tech (www.propela.tech), emphasized that the best AI use cases start with real operational pain points, not tech trends:

"We didn’t build AI tools because AI was cool — we did it because we needed to solve real problems faster, and the tools weren’t out there."

Here are just a few real-world use cases discussed in the session:

Policy Assistant for Frontline Staff

Operators created internal AI chatbots trained only on their own policies — giving staff quick access to procedures like hand-washing protocols, without generic or irrelevant answers.

"Instead of flipping through binders, they ask a question and get the right policy back — no guessing, no Googling," Steven explained.

Gradual Dose Reduction Analysis

This complex regulatory requirement often gets buried in clinical documentation. Using Gen AI, the team built a tool that:

  • Mapped branded vs. generic drug names
  • Recognized dosage changes written in various formats
  • Identified dose reductions across entire patient populations

And it did this automatically, without manual chart reviews.
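
The panel shared the tool's behavior, not its code, so the following is only an assumed sketch: a tiny brand-to-generic map plus a simple dose parser, used to flag any step where the same drug's dose decreased. A real GDR tool would also need schedules, routes, and clinical review.

```python
import re

# Illustrative only: the brand-to-generic map and dose parsing below are
# hypothetical stand-ins for what the session described at a high level.
BRAND_TO_GENERIC = {"seroquel": "quetiapine", "risperdal": "risperidone"}

def normalize(order: str) -> tuple[str, float]:
    """Map a free-text order to (generic_name, dose_in_mg).

    Assumes orders match a simple "NAME DOSE mg" pattern.
    """
    match = re.search(r"([A-Za-z]+)\s+([\d.]+)\s*mg", order)
    name, dose = match.group(1).lower(), float(match.group(2))
    return BRAND_TO_GENERIC.get(name, name), dose

def dose_reductions(orders: list[str]) -> list[tuple[str, float, float]]:
    """Flag each step where the dose of the same generic drug decreased."""
    last_dose: dict[str, float] = {}
    reductions = []
    for order in orders:
        drug, dose = normalize(order)
        if drug in last_dose and dose < last_dose[drug]:
            reductions.append((drug, last_dose[drug], dose))
        last_dose[drug] = dose
    return reductions

history = ["Seroquel 100 mg at bedtime", "quetiapine 50mg qhs"]
print(dose_reductions(history))  # [('quetiapine', 100.0, 50.0)]
```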

Context-Aware Fall Detection

Searching for “fall” in a note might return “Patient returns in the fall,” but AI can distinguish intent and context — pulling only actual fall events.
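
The session described the capability rather than the mechanism, but one common approach is to have a model classify the full sentence instead of matching the keyword alone. In the sketch below, `call_llm` is a placeholder for whatever model endpoint an organization uses:

```python
# Sketch only: `call_llm` stands in for any LLM endpoint; the point is
# that the model sees the full sentence, not just the keyword "fall".

PROMPT = (
    "Does this clinical note describe a resident actually falling? "
    "Answer yes or no.\n\nNote: {note}"
)

def is_fall_event(note: str, call_llm) -> bool:
    """Classify a note by asking the model about its full context."""
    response = call_llm(PROMPT.format(note=note))
    return response.strip().lower().startswith("yes")

# A keyword search flags both notes; a context-aware model should flag only the first.
notes = [
    "Resident found on floor beside bed; unwitnessed fall at 02:00.",
    "Patient returns in the fall for a follow-up visit.",
]

# Demo with a trivial stand-in "model" (a real call would hit an LLM API):
fake_llm = lambda prompt: "yes" if "floor" in prompt else "no"
print([is_fall_event(n, fake_llm) for n in notes])  # [True, False]
```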

Workforce Productivity Monitoring

By blending timeclock data with actions from EHR systems like PCC, the team generated dashboards showing productivity by activity — all from a single AI-generated prompt.
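
As a rough illustration of that blending step (the schemas and numbers below are invented; PCC's actual exports will differ), joining timeclock hours to counts of EHR actions yields a simple per-hour productivity view:

```python
import pandas as pd

# Hypothetical data: these schemas are assumptions, not the panel's actual
# setup. The idea is joining timeclock punches with EHR activity counts.
shifts = pd.DataFrame({
    "employee_id": [101, 102],
    "hours_worked": [8.0, 7.5],
})
ehr_actions = pd.DataFrame({
    "employee_id": [101, 101, 102],
    "action": ["progress_note", "med_pass", "progress_note"],
})

# Count EHR actions per employee, then join to hours worked
actions = ehr_actions.groupby("employee_id").size().reset_index(name="actions")
dashboard = shifts.merge(actions, on="employee_id")
dashboard["actions_per_hour"] = dashboard["actions"] / dashboard["hours_worked"]
print(dashboard)
```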

Seeing Through the Hype

The panel drew a compelling parallel between today’s AI wave and the dot-com boom of the late '90s. Everyone wanted a website back then, but not every website was useful.

"We’re at that same moment now. A lot of companies will say they’re using AI... but ask, what is the AI actually doing?"

Instead of chasing features, organizations should focus on lasting value. The companies that succeed with AI will be the ones who solve real problems first, then layer in the right technology.

AI as a Supplement, Not the Product

There was unanimous agreement: AI works best as a supplement to good data and well-run systems. It’s not here to replace your schedulers, clinical decision-makers, or finance teams.

But it can supercharge them.

That means structured data matters — a lot.

Megadata’s approach? Provide a centralized data warehouse that consolidates inputs across EHRs, time clocks, payroll, and more. That data becomes the fuel for AI tools, whether built in-house or by third-party vendors.

"You don’t need to build every AI tool yourself. But you do need clean data. That’s what makes all of this work."

How to Evaluate AI Tools in LTC

To wrap up, the panel offered a simple framework for evaluating AI products:

  1. Does it solve a real problem you face today?
  2. Can it use your actual data effectively and securely?
  3. Does it supplement your existing workflows instead of replacing them?
  4. Can you test and measure its impact in a limited rollout?

If the answer to any of these is “no,” pause before adopting. Focus on fit and function, not flash.

Final Thought: AI for Everyone

You don’t need a team of developers or a massive budget to start using AI. In fact, you probably already have — through tools like ChatGPT or embedded features in the software you use daily.

"Even at home," Aryeh said, "I’m using AI to change lightbulbs and fix things I never thought I could."

In long-term care, where time and precision matter, that kind of leverage can mean faster answers, better care, and smarter decisions. But only if we stay grounded in what matters most: our people, our processes, and the problems we’re solving.
