Purpose of the framework
This ethical framework provides a foundation for the responsible use of AI within Bracknell Forest Council, guiding decisions to ensure all AI applications uphold privacy, fairness and accountability. The framework distinguishes between personal productivity tools and engineered AI solutions to address the unique ethical and data protection considerations of each.
This framework will operate alongside the council’s AI strategy to ensure compliance with data protection laws and ethical standards, supporting our commitment to transparency, accountability and resident trust.
1. Key principles of ethical AI and data protection
Our ethical framework is built on key principles that guide AI use and safeguard the rights and privacy of individuals.
1.1 Transparency and explainability
AI applications must be transparent, providing understandable explanations for their functions, data sources, and decision-making processes.
Clear communication
The council will document and communicate the purpose and capabilities of each AI application, making sure stakeholders understand how AI supports council services.
Explanatory models
AI models, especially engineered solutions, will provide clear explanations of how outputs are generated. Explanations will be adapted for a broad audience to support public understanding.
Auditable trails
Both personal productivity tools and engineered solutions must include features to trace and log decisions, creating a verifiable record to support transparency and accountability.
1.2 Fairness and non-discrimination
AI tools must operate fairly, ensuring compliance with the Equality Act 2010 and avoiding biases.
Bias assessment
All AI tools, especially engineered solutions, will undergo regular assessments to identify and mitigate biases. This will be detailed in the council's terms and conditions of engagement.
Data inputs will be carefully reviewed to make sure they do not disadvantage any group or individual.
Diverse training data
AI applications must be trained on diverse and representative datasets to prevent biased outcomes.
Non-discriminatory outcomes
AI outputs will be monitored for unintended consequences, with mechanisms to contest outcomes that unfairly disadvantage protected groups or individuals.
1.3 Accountability and governance
A robust governance structure will oversee all AI applications, ensuring responsible, transparent decision-making.
Human oversight
Council officers will maintain control and responsibility for final decisions. Both personal productivity tools and engineered solutions will serve as aids to human judgment, not replacements.
Continuous review
AI applications will be regularly reviewed by the Information Management Group (IMG) for alignment with data protection and ethical standards.
Escalation procedures
If ethical or data protection concerns arise, the issue will be escalated to the Data Protection Officer (DPO) and the Senior Information Risk Owner (SIRO) for immediate action.
1.4 Data minimisation and security
AI systems must only process necessary data, operating securely to protect individuals' personal information.
Data minimisation
AI tools, especially personal productivity tools, will only process data essential for immediate use, with strict limitations on data retention.
Secure storage
All data used by AI applications will be securely stored, with robust access controls and encryption.
Regular security audits
AI systems will undergo routine security checks to detect vulnerabilities and ensure compliance with UK GDPR and data protection laws.
1.5 Contestability and redress
Residents and stakeholders must have accessible options to challenge or seek redress for decisions made with AI support.
Clear redress mechanisms
The council will establish transparent processes to contest decisions involving AI tools, allowing individuals to request explanations and appeal outcomes.
Ethics and fairness reviews
Regular reviews of both productivity tools and engineered solutions will assess their alignment with council ethics policies, offering a fair avenue for residents to raise concerns.
2. Ethical considerations for personal productivity tools
Personal productivity tools, such as document drafting or summarisation tools, will be implemented to support council operations by improving staff productivity without making independent decisions. Specific considerations include:
Transparency in output
Where appropriate, staff must disclose in public documents when productivity tools have been used to assist in generating council documents or communications.
Data sensitivity
Users generating content using productivity tools must adhere to data minimisation standards, avoiding the processing of sensitive or resident-identifiable data unless essential.
Human oversight and final review
Outputs from productivity tools must undergo human review to make sure they meet standards of quality and accuracy and align with council values.
3. Ethical and legal considerations for engineered AI solutions
Engineered AI solutions, such as automated systems for handling high-volume processes, introduce unique ethical and operational considerations. The council will make sure these tools align with rigorous governance standards, supporting high-stakes processes transparently and responsibly.
Traceable decision making
Engineered solutions must include audit trails that document each stage of decision-making, allowing for clear, verifiable accountability.
Automated process oversight
Each engineered AI application will operate under the guidance of council officers, with a designated officer responsible for oversight.
Fairness and compliance assessments
Engineered solutions will undergo regular bias and compliance checks to make sure they operate fairly, transparently and without introducing discriminatory impacts.
4. Biometric processing and data protection
The council recognises that biometric data, such as facial recognition or fingerprints, is classified as special category data under UK GDPR. Any AI applications processing biometric data will adhere to the following standards, ensuring strict compliance with data protection laws and ethical guidelines.
Purpose limitation
Biometric data will only be collected and processed when it serves a clear, essential purpose, such as security verification or specific public safety applications. Use of biometrics will be limited to cases where less intrusive methods are not viable.
Data storage and security
Biometric data will be securely stored with high levels of encryption, and access will be restricted to authorised personnel only.
Transparency and individual rights
Individuals will be informed when biometric data is collected, with a clear explanation of the purpose and how the data will be used, ensuring transparency and upholding individual rights.
Compliance with regulatory standards
Any biometric processing will follow guidance from the Home Office on ethical biometric data use, ensuring proportionality, necessity and compliance with national privacy and security standards.
If a Data Protection Impact Assessment (DPIA) identifies a high risk and we cannot take measures to reduce that risk, we will not begin the processing until we have consulted with the Information Commissioner’s Office (ICO) in line with UK GDPR.
5. Data protection implementation procedures
To implement this framework effectively, Bracknell Forest Council will adopt rigorous data protection measures across all AI projects.
A Data Protection Impact Assessment (DPIA) will continue to be required for all new systems. For those with an AI component, the AI function should be detailed and highlighted in the DPIA process.
5.1 Data processing and retention protocols
Data used in AI applications will be minimised, with strict protocols to control data access and retention.
Controlled access
Access to AI systems and data will be limited to authorised personnel, with comprehensive logging to monitor actions.
Periodic data reviews
Data usage will be reviewed regularly to ensure relevance and compliance, with retention protocols that allow for data deletion or anonymisation when data is no longer needed.
5.2 Governance and oversight
The IMG, chaired by the SIRO, will oversee all AI applications to make sure they meet data protection and ethical standards.
Compliance audits
Regular audits will verify compliance with UK GDPR and council data policies.
Ethics reviews
IMG will conduct ethics reviews to assess the fairness of AI applications, addressing potential biases and adjusting practices as necessary.
Incident reporting
Any breaches or ethical concerns will be reported to the DPO and SIRO, ensuring quick and effective resolution.
5.3 Stakeholder engagement and education
To reinforce public trust, Bracknell Forest Council will engage residents, staff and other stakeholders throughout the AI lifecycle.
Provide accessible information
Clear, accessible information on AI applications will be provided to stakeholders, detailing purpose, benefits and safeguards.
Education opportunities
Training sessions for staff will build a thorough understanding of AI’s role, limitations, and ethical implications.
Feedback collection
Regular feedback from residents and staff will help identify concerns and support ongoing alignment with council values.
6. Commitment to ethical AI and continuous improvement
Bracknell Forest Council commits to continuous improvement and adaptation of its AI Ethical Framework.
As technology and regulatory landscapes evolve, this framework will be updated to uphold best practices in data protection, transparency and fairness, fostering public trust and responsible AI use.