2025 | Volume 4, Issue 1
policy guidance document for the use of AI, most
respondents answered “No.” This included 80% of
inspection agencies, 62.5% of testing and calibration
laboratories, and 45% of certification bodies. All
respondents in the “Other” category selected “Unsure.”
Open-Ended Comments from Respondents
Concerns and Cautions
• AI can often prompt useful thinking; however, it
requires existing knowledge to sense-check its output.
Misinformation is frequent.
• We use ChatGPT Teams, which does not share our
data for training its AI.
• Advantages include:
– Efficiency and speed: AI can accelerate data
collection, analysis, and reporting.
– Resource optimization: Automating routine tasks
lets auditors focus on complex evaluations.
Balanced Perspectives
• The use of AI in conformity assessments could lead
to fraudulent reporting and decreased confidence in
the validity of the certification process.
• It is not a good idea to implement AI now.
• AI seems to be uncontrollable.
• Any large model AI that I am aware of requires data
being sent to the cloud for processing—an inherent
risk to our data security.
• Managing sensitive data through AI systems raises
concerns about data privacy and security.
• Performing an assessment using AI is not feasible
now. However, using AI to assist with repetitive
tasks or to verify specific clause content may help
improve the quality of technical reviews.
• Regulatory approval for AI-based methods can be
challenging due to the need for extensive validation.
• Our company basically does not use artificial
intelligence in our work, so we cannot give effective
opinions or suggestions.
• We are not implementing AI in our organization;
it remains a concern.
• When looking at AI in conformity assessment, I
worry about keeping things fair and consistent.
Human assessors sometimes interpret standards
differently. I’m more concerned that those
developing AI may program it to follow one narrow
path, potentially limiting flexibility and missing the
broader context needed in assessments.
• Still a lot to learn, so no need to jump on the
bandwagon yet. Remember, “If it isn't broke, don’t
fix it.”
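One balanced suggestion above, using automation to verify specific clause content in technical reviews, does not even require a large model. A minimal sketch of the idea as a deterministic pre-check; the required clause numbers follow ISO/IEC 17025:2017 (7.2 method validation, 7.6 measurement uncertainty, 7.8 reporting of results), and the draft report text is hypothetical:

```python
import re

# Clause references a technical review is expected to cite (ISO/IEC 17025:2017).
REQUIRED_CLAUSES = ["7.2", "7.6", "7.8"]

def missing_clauses(report_text, required=REQUIRED_CLAUSES):
    """Return the required clause numbers not cited anywhere in the report."""
    cited = set(re.findall(r"\b(\d+\.\d+)\b", report_text))
    return [c for c in required if c not in cited]

# Hypothetical draft report excerpt.
draft = "Validation per clause 7.2 was reviewed; uncertainty per 7.6 is attached."
print(missing_clauses(draft))  # the reporting clause, 7.8, is not cited
```

A check like this could run before any AI-assisted review, flagging obvious gaps while leaving interpretation to the human assessor.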
Opportunities and Use Cases
• AI could be used to analyze client documents more
quickly.
• AI can help streamline calculations that LIMS
cannot do—for example, comparing MDL studies
across multiple instruments.
• AI has the potential to improve efficiency, risk
management, and decision-making. However, its
implementation must comply with ISO/IEC 17021-1,
accreditation body rules (such as IAS MSCB 002),
and ethical standards.
• Standards like ISO/IEC 17021-1 require certification
bodies to demonstrate competence and impartiality.
AI tools must support, not compromise, these
principles.
• The key is to use AI as a tool to enhance—not
replace—human expertise in certi昀椀cation and
accreditation.
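The MDL comparison mentioned above is simple enough to sketch. A minimal illustration of comparing method detection limit studies across instruments, assuming seven spiked replicates each and the common MDL formula (one-sided Student's t at 99% confidence times the replicate standard deviation); the instrument names and results are hypothetical:

```python
from statistics import stdev

# One-sided Student's t at 99% confidence for n = 7 replicates (6 degrees of freedom).
T_99_6DF = 3.143

def mdl(replicates):
    """Method detection limit: t-value times the replicate standard deviation."""
    return T_99_6DF * stdev(replicates)

# Hypothetical replicate results (ug/L) from two instruments running the same method.
runs = {
    "Instrument A": [0.52, 0.48, 0.55, 0.50, 0.47, 0.53, 0.49],
    "Instrument B": [0.61, 0.44, 0.58, 0.39, 0.66, 0.42, 0.57],
}

mdls = {name: mdl(values) for name, values in runs.items()}
for name, value in sorted(mdls.items()):
    print(f"{name}: MDL = {value:.3f} ug/L")
```

Instrument B's wider replicate spread yields a noticeably higher MDL, the kind of cross-instrument discrepancy a reviewer (AI-assisted or not) would want flagged.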
Conclusion
The survey provides a valuable snapshot of how Artificial
Intelligence (AI) is currently perceived and used across
conformity assessment bodies, including inspection
agencies, testing laboratories, and certi昀椀cation bodies.
Some organizations have begun using AI for tasks
like data analysis, content generation, and automating
routine processes. However, adoption remains uneven,
and significant concerns persist.
Respondents cited a range of issues, including data
privacy and security risks, ethical considerations, and the
potential for job displacement. Many also noted a lack of
clear internal policies or guidance on AI use, highlighting
a need for industry-wide education and standards.
Despite these challenges, the findings suggest that AI
holds promise for improving efficiency, streamlining
workflows, and supporting decision-making—if
integrated thoughtfully. Moving forward, organizations
should prioritize transparency, data protection, and
continued human oversight to ensure that AI enhances
rather than replaces expertise.
Striking the right balance between innovation and
responsibility will be key to realizing AI’s full potential in
the conformity assessment field.