Clinsota

Medical device regulatory insight · 2026-01-09

New FDA CDS Guidance: Which Software Can Stay Outside Device Regulation, and Which Falls Back Under It?

Not a medical device, or still subject to regulatory submission? The latest FDA CDS guidance provides a clear answer

Based on the original text: Guidance-Clinical-Decision-Software_3.pdf (26 pages total) | Interpretation

1. First clarify the regulatory positioning: this is a “boundary guide,” not a “blanket exemption”

The first paragraph states its legal status very plainly: this guidance “represents the FDA’s current thinking” and “does not establish legally enforceable responsibilities,” and companies may use alternative approaches so long as they comply with applicable laws and regulations. [p4 | opening statement] In practice, however, its role is more like this:
• When you engage in Pre-Sub/Q-Sub communications, reviewers will use it as a framework for questions;
• When you draft intended use/labeling, it determines whether one “non-device” sentence will still drop you back into “device” territory;
• When you split a product (launch lower-risk functions first and iterate higher-risk functions later), it determines whether you can move quickly. [p7 | IV overview]

2. The “statutory core” of this guidance: the four criteria in 520(o)(1)(E), all of which must be met

At the beginning of Chapter IV, the FDA provides a key decision rule: for a software function to be excluded from the device definition, it must “meet all four criteria.” In brief, the four criteria are:
• It does not acquire, process, or analyze medical images, signals from an in vitro diagnostic device, or patterns/signals from a signal acquisition system (criterion 1);
• It processes “medical information” (criterion 2);
• It provides recommendations to the HCP rather than replacing their judgment (criterion 3);
• It enables the HCP to independently review the basis for the recommendations, so that they do not primarily rely on it (criterion 4). [p7 | IV overview]
Practical takeaway for companies: this is strict “AND” logic (1 & 2 & 3 & 4). Fail any one criterion and the FDA will treat the function as a device.
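As a minimal sketch, the all-four-criteria rule is a strict boolean AND. The field and function names below are our own shorthand for illustration, not FDA terminology:

```python
# Illustrative only: a non-device CDS determination is the conjunction of the
# four 520(o)(1)(E) criteria; failing any one leaves the function a device.
from dataclasses import dataclass

@dataclass
class SoftwareFunction:
    analyzes_images_or_signals: bool      # criterion 1 (must be False)
    input_is_medical_information: bool    # criterion 2
    output_is_recommendation: bool        # criterion 3 (not a directive)
    basis_independently_reviewable: bool  # criterion 4

def is_non_device_cds(fn: SoftwareFunction) -> bool:
    """Strict AND: fail any one criterion and the function remains a device."""
    return (not fn.analyzes_images_or_signals
            and fn.input_is_medical_information
            and fn.output_is_recommendation
            and fn.basis_independently_reviewable)

# A function that ingests raw ECG waveforms fails criterion 1 alone:
assert is_non_device_cds(SoftwareFunction(True, True, True, True)) is False
```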

3. Criterion 1: do not “touch signals, patterns, or images” — this is the most common failure point

The wording of criterion 1 is strict: if software is “intended to acquire, process, or analyze a medical image, or a signal from an in vitro diagnostic device, or a pattern or signal from a signal acquisition system,” and its purpose falls within the device definition, it remains a device and is regulated by the FDA. [p7 | first paragraph of IV(1)]

3.1 What is a medical image? [p7 | IV(1)]

The FDA explains that images generated by CT, X-ray, ultrasound, MRI, and similar systems are all medical images; even images “not originally acquired for a medical purpose” will still be treated as medical images if you use them for medical processing/analysis. [p7 | second paragraph of IV(1)]

3.2 What is a signal? [p7-8 | IV(1)]

Signals include electrochemical/photometric signals generated by assay reactions in IVDs, as well as physiological parameter signals produced by various signal acquisition systems. The FDA also gives typical forms of signal acquisition systems: sensors + electronics + software generating signals (such as ECG), sample/specimen acquisition (digital pathology), and imaging systems generating images. [p7-8 | bullet points in IV(1)]

3.3 The most critical term: how exactly is “pattern” defined? [p8 | IV(1)]

The FDA makes this clear: pattern refers to “multiple, sequential, serial, or repeated” measurements, whereas “discrete, sporadic, or intermittent point-in-time measurements” (such as a one-time vital sign measurement in an outpatient visit) generally do not constitute a pattern. It also provides typical examples of patterns: ECG waveforms/QRS, NGS sequence and variant call files (VCF), and continuous glucose readings over time from CGM. [p8 | IV(1)]

3.4 Practical application for companies: how do you use criterion 1 to design the “product boundary”?

Break product functions into two layers:
• Layer A (more likely to qualify as non-device): ingest “report-level/result-level” information (for example, “imaging report conclusion/BI-RADS,” “physician-annotated ECG conclusion,” or a single blood pressure/laboratory result), without ingesting raw waveforms or continuous data streams. [p10 | example in IV(2)]
• Layer B (very likely a device): directly analyze images, waveforms, or continuous trends (such as CGM/continuous ECG monitoring/NGS sequences), which usually puts you back on the device pathway. Criterion 1 is a hard threshold. [p7-8 | IV(1)]

4. Criterion 2: what you process is “medical information,” not a “signal stream”

The core of criterion 2 is that the input must be medical information (patient medical information or other medical information), such as clinical studies or guidelines. As long as the input falls into this category and the other three criteria are also met, the software function may fall outside device regulation. [p9 | first paragraph of IV(2)]
The FDA offers a very practical distinction: a single, discrete, clinically meaningful measurement result (such as a one-time blood glucose laboratory result) is medical information; continuous sampling of the same parameter (such as continuous CGM readings) is a pattern/signal, which returns you to criterion 1 and remains a device. [p10 | second paragraph of IV(2)]
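This discrete-versus-continuous distinction can be sketched as a simple classifier. The function name and the single-sample threshold are assumptions for illustration, not a rule stated in the guidance:

```python
# Illustrative only: a one-off point-in-time result counts as "medical
# information" (criterion 2); sequential/repeated sampling of the same
# parameter counts as a pattern/signal (criterion 1, device territory).
def classify_input(parameter: str, sample_count: int) -> str:
    """The `parameter` name is for the caller's records; the classification
    here turns only on whether sampling is discrete or repeated."""
    if sample_count <= 1:
        return "medical information"
    return "pattern/signal"

classify_input("blood glucose (one-time lab result)", 1)  # -> "medical information"
classify_input("CGM glucose readings over 24h", 288)      # -> "pattern/signal"
```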

Examples of inputs permitted under criterion 2 (you can compare these directly against your product’s data dictionary): [p9-10 | IV(2)]

• Patient information such as demographics, symptoms, certain test results, and discharge summaries; [p9 | IV(2)]
• “Independently verifiable information” such as clinical practice guidelines, peer-reviewed studies, textbooks, approved drug/device labeling, and government recommendations; [p9-10 | IV(2)]
• Radiology reports (for example, BI-RADS conclusions) and summarized outputs from legally marketed CAD (for example, “12 CAD markers identified”); [p10 | IV(2)]
• ECG reports annotated by an HCP (for example, “suggestive of atrial fibrillation”); [p10 | IV(2)]
• A single blood pressure result from a legally marketed device, laboratory results in the EHR, etc. [p10 | IV(2)]

Implementation reminder for companies:

If your current algorithm requires “raw waveforms/continuous trends” to function, a non-device route usually cannot be reached by wording alone; you need to change the product architecture so the algorithm ingests “result-level summaries” or “physician-confirmed conclusions.” [p10 | IV(2) + p7-8 | IV(1)]

5. Criterion 3: you provide a “recommendation,” not an “instruction” — one sentence can determine market access difficulty

The FDA’s interpretation of criterion 3 is critical: software may provide “disease-specific” or “patient-specific” recommendations to enhance, inform, or influence HCP decision-making, but it is “not intended to replace or direct the HCP’s judgment.” If the software provides a “specific preventive, diagnostic, or treatment output or directive,” it does not satisfy criterion 3 and remains a device. [p10 | IV(3)]
There is a point here that companies care about greatly: what if there is only one recommended option? The FDA says that if only one option is “clinically appropriate,” and the other criteria are also met, the FDA “intends to exercise enforcement discretion” (i.e., it does not intend to enforce the requirements under the FD&C Act). [p11 | first paragraph of IV(3)]

Questions for companies to consider:

Can you change the output from a “system conclusion” to “options/prompts for the physician”?
• “Suggest that the physician consider A/B/C” is more consistent with criterion 3;
• “Diagnosis is X; recommend immediate treatment with drug Y” reads like an instruction and is more likely to be treated as a device. And do not forget: criterion 3 also requires that the software not replace or direct the HCP. [p10 | IV(3)]

In IV(3), the original text provides several examples of “the same topic implemented differently,” showing exactly where a product falls back into device status:

For example, a tool that predicts future cardiovascular event risk may itself satisfy the criterion; but if it introduces genetic variant data with “no established association,” or turns into a time-sensitive prediction such as an event “within the next 24 hours,” it returns to the FDA’s device regulatory focus. [p11 | examples in IV(3)]

6. Criterion 4: what the FDA is really concerned about is not that your software is smart, but that physicians “automatically trust it”

Criterion 4 is the part of this guidance that most resembles a practical implementation guide: the FDA presents it as an executable checklist. The core requirement is that the software must enable the HCP to “independently review the basis for the recommendations,” and it should not be intended for the HCP to rely primarily on the system’s recommendation when making decisions for an individual patient. [p14 | first paragraph of IV(4)]
The FDA also makes one scenario clear that is almost a veto by itself: if your software is used for “time-critical” tasks or decisions, the FDA will generally consider criterion 4 not satisfied, because the physician does not have sufficient time to independently review the basis. [p14 | IV(4)a]

6.1 Criterion 4 compliance checklist (recommended for direct use in your PRD/compliance review):

a) Describe the product purpose/intended use, intended HCP users, and intended patient population; and avoid time-sensitive use cases. [p14 | IV(4)a]
b) Specify the required input medical information: how it will be obtained, why it is relevant, and what data quality requirements apply. [p14 | IV(4)b]
c) Explain algorithm development and validation in plain language so the intended HCP can understand the basis: [p14 | IV(4)c]
i. Summarize the methods: meta-analysis, expert consensus, statistical modeling, AI/ML, etc., with appropriate detail on the logic/methodology. [p14 | IV(4)c-i]
ii. Describe the data: population representativeness, key subgroups, disease conditions, collection sites, sex, race/ethnicity, etc., and describe best practices (independent development/validation datasets). [p14 | IV(4)c-ii]
iii. Provide clinical study validation results so HCPs can assess performance and limitations (which subpopulations were not tested or showed substantial performance variability). [p15 | IV(4)c-iii]
d) Include key patient-relevant knowns and unknowns in the output, such as missing, anomalous, or contaminated inputs, to help clinicians form their own judgment. [p15 | IV(4)d]
The FDA also specifically emphasizes two points that anyone drafting submissions should copy into their templates:
• Regardless of how complex the algorithm is or whether it is commercially confidential, sufficient background information should be provided, written in plain language. [p15 | IV(4)]
• Information should be presented in a way that avoids information overload: prioritize the information most relevant to decision-making, with details available on demand. [p15 | IV(4)]
Industry interpretation: criterion 4 is really intended to prevent automation bias. When clinicians face a system, they may default to assuming the system is more accurate and faster, and then simply “do what it says.” Before the examples section, the original text specifically notes that time pressure and single recommendations can amplify automation bias; this is also the underlying logic behind the FDA’s interpretation of criterion 4. [p16 | automation bias section]

7. How to use the examples in Chapter V: treat them as the FDA's case library for you

The biggest pain point for many companies reading the guidance is this: they understand the abstract provisions, but do not know how to apply them to a specific product. Chapter V is the FDA’s “case library”: which functions are Non-Device CDS, and which remain devices. [p16 | V introduction] One internal exercise is strongly recommended: compare each function on your feature list against these examples one by one, and label which example it most resembles.

7.1 One of the most valuable uses: work backward from the examples to identify which piece of evidence or explanation you are missing

V.B specifically provides examples of how the basis should be presented to satisfy criterion 4 (that is, the content you need to include in the software interface, IFU, or technical documentation). If your product can only provide conclusions, without the basis, limitations, or data representativeness, then criterion 4 is where you will get stuck. [p20 | V.B]

7.2 Typical signals that a function is still a device: imaging, 3D modeling, or automated treatment plan generation

For example, a function that manipulates/analyzes images from radiological devices and generates a 3D model for surgical planning is explicitly listed as a device function, because it analyzes medical images and is not merely displaying/analyzing medical information, nor is it presented simply as a recommendation. [p22 | V.C example]

8. Market access threshold map: how to stratify from “easiest” to “hardest”?

If you translate the four criteria into a market access strategy, you will generally land in three tiers:
A) Non-Device CDS: all four criteria are met → in principle, not regulated as a device.
B) Enforcement discretion: in certain circumstances under IV(3), the FDA indicates an intention not to enforce, but you must still be very careful to satisfy the prerequisites. [p11 | IV(3)]
C) Device software function: fail any one criterion → enter the device pathway (510(k)/De Novo/PMA depending on risk and predicate device). [p7 | IV overview]
In practice, you can quickly gauge the level of difficulty with three questions:
1) Am I analyzing images, continuous signals, or patterns? (If yes → most likely C) [p7-8 | IV(1)]
2) Is my output an “instruction/conclusion” rather than a “recommendation/option”? (If yes → more like C) [p10 | IV(3)]
3) Can the clinician immediately see the basis, limitations, and data representativeness within the software? (If no → criterion 4 becomes the blocker) [p14-15 | IV(4)]
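The three self-triage questions can be sketched as a short decision function. The tier labels and parameter names are our own shorthand, not FDA terminology:

```python
# Illustrative triage only: maps the three questions in the text onto the
# A/C tiers above, checking the criteria in the order the questions pose them.
def access_tier(analyzes_patterns_or_images: bool,
                output_is_directive: bool,
                basis_reviewable: bool) -> str:
    if analyzes_patterns_or_images:
        return "C: device pathway (criterion 1 fails)"
    if output_is_directive:
        return "C: device pathway (criterion 3 fails)"
    if not basis_reviewable:
        return "blocked on criterion 4"
    return "A: candidate Non-Device CDS"

access_tier(False, False, True)  # -> "A: candidate Non-Device CDS"
```

Tier B (enforcement discretion under IV(3)) is deliberately left out of the sketch, since it turns on fact-specific prerequisites rather than a yes/no input.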

9. A compliance approach companies can start implementing immediately

The section below translates the original requirements into concrete deliverables your team can produce (documentation/interface/evidence).

9.1 Claims and labeling (so one sentence from marketing does not push you into 510(k))

• Establish a “claims blacklist”: avoid directive wording such as “automatic diagnosis,” “system determination,” or “treatment required” (corresponding to criterion 3). [p10 | IV(3)]
• In the IFU/labeling, clearly specify the intended HCP user, patient population, and non-time-critical use scenario (corresponding to criterion 4a). [p14 | IV(4)a]
• Make the “basis for the recommendation” accessible from the interface: guideline version, literature, drug label version date, etc. (corresponding to the overall requirements of criterion 4). [p15 | IV(4)]

9.2 Input data and data quality

• Create an “input classification table”: which inputs are medical information (permitted), and which are patterns/signals (high risk). [p10 | IV(2) vs p8 | IV(1)]
• For each input, clearly document the collection method, quality requirements, and handling strategy for missing/anomalous data (corresponding to criteria 4b and 4d). [p14 | IV(4)b; p15 | IV(4)d]
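A minimal sketch of such an input classification table: the example entries echo the guidance’s own examples, but the table structure and helper function are assumptions for illustration:

```python
# Illustrative input classification table: each entry is
# (input, classification, risk note). Entries mirror examples from the
# guidance's IV(1)/IV(2) discussion; the structure itself is our own.
INPUT_CLASSIFICATION = [
    ("radiology report conclusion (BI-RADS)", "medical information", "permitted"),
    ("HCP-annotated ECG conclusion",          "medical information", "permitted"),
    ("single blood pressure result",          "medical information", "permitted"),
    ("raw ECG waveform",                      "pattern/signal",      "device risk"),
    ("continuous CGM readings",               "pattern/signal",      "device risk"),
    ("NGS sequence / VCF",                    "pattern/signal",      "device risk"),
]

def high_risk_inputs(table):
    """Inputs that would pull the product back under criterion 1."""
    return [name for name, cls, _ in table if cls == "pattern/signal"]
```

Running `high_risk_inputs(INPUT_CLASSIFICATION)` flags exactly the waveform/continuous/sequence inputs, which is the review your data dictionary should get before claiming a non-device route.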

9.3 Algorithm transparency (turn criterion 4c into a template)

• Method summary: are you using expert rules, a statistical model, or AI/ML? Present it at a level HCPs can understand (corresponding to 4c-i). [p14 | IV(4)c-i]
• Data representativeness statement: key subgroups, collection sites, sex, race/ethnicity, etc.; explain the use of independent development/validation datasets (corresponding to 4c-ii). [p14 | IV(4)c-ii]
• Performance and limitations: validation study results, untested subpopulations, and subgroups with substantial performance variability (corresponding to 4c-iii). [p15 | IV(4)c-iii]
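One way to turn this template into a concrete artifact is a structured record per algorithm; the field names below are our own shorthand for the 4c items, not FDA-defined fields:

```python
# Hypothetical "criterion 4c evidence" record; a compliance review can then
# check mechanically that no transparency item has been left blank.
from dataclasses import dataclass, field

@dataclass
class AlgorithmTransparencyRecord:
    method_summary: str                 # 4c-i: expert rules / statistical model / AI-ML
    data_representativeness: str        # 4c-ii: subgroups, sites, sex, race/ethnicity
    validation_results: str             # 4c-iii: performance and limitations
    untested_subpopulations: list = field(default_factory=list)

    def gaps(self) -> list:
        """Checklist items still empty before a compliance review."""
        return [name for name, value in [
            ("method_summary", self.method_summary),
            ("data_representativeness", self.data_representativeness),
            ("validation_results", self.validation_results),
        ] if not value.strip()]
```

For example, a record with an empty data-representativeness statement would report that one field as its remaining gap.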

9.4 Interface and output (so clinicians can “independently review,” while avoiding information overload)

• The output should include the “why”: show the key patient factors that triggered the recommendation and indicate missing data (corresponding to 4d). [p15 | IV(4)d]
• Layer the presentation: show the information most relevant to decision-making first, with details expandable, to avoid information overload (the FDA explicitly mentions avoiding overload). [p15 | IV(4)]

10. What market opportunities does this wave of “relaxation” actually create?

The opportunity is not to “do less compliance,” but to “segment the pathway more intelligently.” Once the guidance clarifies the boundary, you can:
• First launch as Non-Device CDS: use result-level information plus explainable output to enter the market quickly (provided all four criteria are met). [p7 | IV overview; p9-10 | IV(2); p14-15 | IV(4)]
• Then iterate toward a device function: when you want to use raw signals/images or make time-sensitive decisions, prepare for the 510(k)/De Novo pathway; by then you will already have accumulated real-world use experience and data. (Note: this is a strategic inference; the original text provides the boundary conditions and risk considerations.) [p14 | IV(4)a; p16 | automation bias]

11. Turn the “guidance provisions” into a “benchmarking checklist”

The most common pain point for companies is not that they cannot understand the provisions, but that no one has broken them down into “which documents/interface elements/validation activities are required.” A practical work package can be structured as follows:
1) Assess each of the four criteria one by one and decompose functions (against the examples in Chapters IV and V). [p7 | IV; p16 | V]
2) Rewrite claims/labeling and review risk-triggering phrasing (against IV(3)). [p10-11 | IV(3)]
3) Build a criterion 4 evidence package template: algorithm description, data representativeness, validation results, limitations, and missing-data handling (against IV(4)a-d). [p14-15 | IV(4)]
4) Prepare Pre-Sub communication materials when necessary (so the FDA can signal agreement on the boundary early).

12. Closing in one sentence: this FDA “relaxation” still requires everyone to clearly explain the “basis”

After reading this guidance, you will see that what the FDA repeatedly emphasizes is not “whether you can use AI,” but whether clinicians can understand why you are making the recommendation, whether it is appropriate for the patient in front of them, what the limitations are, and whether time pressure may force them to “primarily rely on the system.” [p14-16 | IV(4)] If you put these elements in place, your market access threshold will come down in a very real way.