AI in Business: Transparency, Control, and Ethical Responsibility

Published on 25 June 2025 | reading time approx. 5 minutes

Artificial Intelligence (AI) is increasingly being used in business processes. This includes automated decision-making, intelligent forecasting models, and integration into operational ERP systems and accounting software. However, as AI becomes more embedded in existing IT systems, the requirements for reliable auditing and evaluation of such systems also increase.
 

AI systems differ fundamentally from traditional IT systems: they operate based on data, often autonomously, and are subject to continuous learning and adaptation. This raises new questions, such as the transparency of decision-making processes, accountability in the case of malfunctions, and the documentation of training data and models.

Structural Elements of AI Systems and Goal Orientation

To integrate AI systems effectively into business processes, a clear structure aligned with defined AI objectives is essential. These objectives are not abstract; they are derived directly from the company's strategic goals, taking into account both ethical values and legal or regulatory requirements. To achieve these goals, the various elements of an AI system work together on the basis of principles, procedures, and organizational measures. According to the German auditing standard IDW PS 861 (03.2023), "Audit of AI Systems", an AI system can be described in terms of the following components:

  1. Governance, Compliance and Monitoring
  2. Data
  3. Algorithms and Models
  4. Application
  5. IT Infrastructure
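
To make this structure tangible, the following minimal sketch (Python, purely illustrative; neither the field names nor any data format are prescribed by IDW PS 861) shows how a company might document these five components for a single AI system in a machine-readable inventory:

  from dataclasses import dataclass, field

  @dataclass
  class AISystemRecord:
      """Illustrative inventory entry covering the five components listed above."""
      name: str
      governance: dict = field(default_factory=dict)             # policies, responsibilities, monitoring duties
      data: dict = field(default_factory=dict)                   # sources, quality criteria, lifecycle documentation
      algorithms_and_models: dict = field(default_factory=dict)  # model type, version, approval status
      application: dict = field(default_factory=dict)            # deployment context, change management
      it_infrastructure: dict = field(default_factory=dict)      # hosting, access controls, backups

  record = AISystemRecord(
      name="demand-forecasting",
      governance={"owner": "Finance IT", "policy": "AI policy v1.2"},
      data={"sources": ["ERP bookings"], "quality_review": "monthly"},
      algorithms_and_models={"model": "gradient boosting", "version": "3.1", "approved": True},
      application={"environment": "production", "change_process": "ticketed release"},
      it_infrastructure={"hosting": "on-premises", "backup": "daily"},
  )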

Governance, Compliance and Monitoring: A Central Element of Responsible AI Use

The responsible use of AI requires a well-thought-out organizational and process structure with firmly established mechanisms for governance, compliance, and monitoring. Companies need a clearly defined AI strategy that sets out policies for the development, implementation, and operation of AI systems. This strategy ensures that internal corporate goals, as well as external requirements such as legal, regulatory, and ethical standards, are taken into account and complied with.
 
Strategic guidelines are translated into concrete policies and procedures. These include documented responsibilities for operating and further developing the AI system, as well as processes for identifying and correcting weaknesses. A professional AI monitoring system also includes regular oversight of compliance with these measures and traceable documentation.
 
Furthermore, it must always be ensured that human oversight is possible. This is achieved through mechanisms that clearly signal when human intervention is required.
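
A minimal sketch of such a mechanism (illustrative only; the confidence threshold and the function name are our assumptions, not requirements of the standard) routes low-confidence outputs to a human reviewer instead of processing them automatically:

  def decide_with_oversight(confidence: float, threshold: float = 0.85) -> str:
      """Route a model output to automated processing or to human review.

      Any output whose confidence falls below the (illustrative) threshold
      is flagged so that a person can intervene before it takes effect.
      """
      if confidence >= threshold:
          return "auto-approved"
      return "escalated to human review"

  print(decide_with_oversight(0.72))  # -> escalated to human review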

Data Quality and Responsibility

Data is the foundation of every AI system; accordingly, the requirements for its quality, origin, and use are high. Companies must ensure that both internally generated and external data sources meet ethical, legal, and technical standards. Guidelines should govern the entire data lifecycle: from identifying the source and assessing its suitability to documenting and monitoring any changes.
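
As a sketch of how such lifecycle documentation might look in practice (illustrative; the field names and status values are assumptions), each data source can be recorded together with its suitability assessment and a fingerprint that makes later, undocumented changes detectable:

  import hashlib
  from datetime import date

  def fingerprint(rows: list[str]) -> str:
      """Hash the raw records so that later changes to the dataset can be detected."""
      return hashlib.sha256("\n".join(rows).encode("utf-8")).hexdigest()

  dataset_record = {
      "source": "ERP financial module, export June 2025",
      "suitability_assessment": "approved by data owner",
      "legal_basis_checked": True,
      "assessed_on": date(2025, 6, 1).isoformat(),
      "sha256": fingerprint(["4711;2025-05-31;1200.00", "4712;2025-05-31;980.50"]),
  }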

Algorithms and Models: Transparency and Control

The underlying AI algorithm and model play a central role. They must be developed or adapted in a way that ensures traceable decisions aligned with the defined objectives and the required accuracy for the specific use case. Ethical principles such as fairness, human autonomy, and non-discrimination are integral to the development processes.
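
One way to make the non-discrimination requirement measurable (a sketch only; the choice of metric and the tolerance are our assumptions, not mandated by the standard) is to compare the rates of positive decisions across two groups:

  def demographic_parity_difference(group_a: list[int], group_b: list[int]) -> float:
      """Absolute gap between the positive-decision rates of two groups (1 = positive decision)."""
      rate_a = sum(group_a) / len(group_a)
      rate_b = sum(group_b) / len(group_b)
      return abs(rate_a - rate_b)

  gap = demographic_parity_difference([1, 1, 0, 1], [1, 0, 0, 0])
  needs_review = gap > 0.1  # illustrative tolerance; breaches trigger a model review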

Any manual interventions or changes during the learning process are subject to a structured testing and approval procedure. Continuous development in live operations takes place under clearly defined conditions. Technical and organizational measures are also in place to ensure both the security of the model and the traceability of any changes made.
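
A minimal sketch of such traceability (illustrative; the log format and the approval rule are assumptions) records a fingerprint of each model version together with its approval and blocks deployment of unapproved changes:

  import hashlib
  from datetime import datetime, timezone

  change_log: list[dict] = []

  def register_model_change(model_bytes: bytes, approver: str | None) -> bool:
      """Log a model change; only changes with a documented approval may be deployed."""
      change_log.append({
          "sha256": hashlib.sha256(model_bytes).hexdigest(),   # traceable identity of the artifact
          "approved_by": approver,
          "timestamp": datetime.now(timezone.utc).isoformat(),
      })
      return approver is not None  # deployment gate: no approval, no rollout

  deployable = register_model_change(b"serialized model v3.2", approver="model risk committee")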

Selection and Implementation of AI Applications

When selecting and implementing AI applications, internal corporate guidelines must be observed. Ethical, legal, and regulatory requirements need to be considered already during the procurement process. Change management, testing, and approval procedures ensure that only authorized and verified versions of an AI system are deployed in production. The ongoing operation is monitored using suitable metrics, such as response time or system stability, to avoid disruptions or performance degradation.
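
As a sketch of such operational monitoring (illustrative; the threshold and metric are assumptions), recent response times can be summarised and an alert raised when the agreed limit is exceeded:

  from statistics import mean

  def check_response_times(samples_ms: list[float], limit_ms: float = 500.0) -> dict:
      """Summarise recent response times and flag any breach of the (illustrative) limit."""
      return {
          "avg_ms": mean(samples_ms),
          "max_ms": max(samples_ms),
          "alert": max(samples_ms) > limit_ms,
      }

  status = check_response_times([120.0, 180.0, 640.0])
  if status["alert"]:
      print("Performance degradation detected, notify the AI system owner.")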

Requirements for IT Infrastructure and IT Security

The IT infrastructure used to operate an AI system must meet the specific requirements of the deployment scenario. A security concept based on the AI strategy is therefore essential. This concept includes logical access controls, protective mechanisms against malware, physical security precautions, and effective backup and data recovery procedures. These measures are designed to protect the system against manipulation, data loss, and unauthorized access.
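
A minimal sketch of a logical access control (illustrative; the roles and permissions are assumptions, and a real deployment would rely on the organization's identity and access management) distinguishes who may change the model from who may merely use or audit it:

  PERMISSIONS = {
      "ml_engineer": {"read", "retrain", "deploy"},
      "business_user": {"read"},
      "auditor": {"read", "export_logs"},
  }

  def is_allowed(role: str, action: str) -> bool:
      """Check an action against the role's permitted set; unknown roles get no access."""
      return action in PERMISSIONS.get(role, set())

  assert is_allowed("auditor", "export_logs")
  assert not is_allowed("business_user", "deploy")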

Conclusion: A Holistic Approach to the Use of AI Systems

The deployment of AI systems is far more than a technical decision: it requires well-thought-out organizational, procedural, and ethical integration. This is especially essential in areas with financial impact or relevance to the annual financial statements. With IDW PS 861, companies and auditors have, for the first time, a comprehensive framework for evaluating these complex systems.

Rödl & Partner supports you with expertise and experience, from strategy development and implementation to auditing and risk assessment of AI systems in line with the new standards.

Contact

Frank Reutter, Partner, +49 221 949 909 316
Tassilo Föhr, Manager, +49 731 96260 14
Enes Arslan, Associate Partner, +49 221 9499 09335
