
Security Policies

Information Security Policy

Last reviewed 2 February 2026.

Introduction

This policy describes the objectives, and the commitments to meeting those objectives, that BTL Group Ltd, trading as Surpass Assessment, and its wholly owned subsidiary, Surpass Assessment Inc (collectively, the Group), have set to maintain the confidentiality, integrity, and availability of its data.

An Information Security Policy is a requirement of ISO/IEC 27001:2022, Clause 5.2.

Scope

This policy applies to all employees, contractors, and third parties that access, process, or manage information on behalf of the Group.

This policy applies to the Group in all geographical areas where it operates, unless specific local exclusions apply, in which case the exclusion(s) shall be clearly stated in this section. Where applicable legislation exists in more than one territory or geographical area, the more restrictive shall apply and shall be clearly stated in this section.

Exceptions to this policy must be agreed in writing with the Compliance Group and recorded in the Surpass Risk Register.

Objectives

The objectives of this policy are to outline the Group’s intent to:

  • Protect the information assets that the Group handles, stores, exchanges, processes, and has access to and ensure the ongoing maintenance of their confidentiality, integrity, and availability.
  • Ensure controls are implemented that provide protection for information assets and are proportionate to their value and the threats that they are exposed to.
  • Ensure the Group complies with all relevant legal, customer, and other third-party requirements relating to information security.
  • Continually improve the Group’s Information Security Management System and its ability to withstand threats that could potentially compromise information security.

Policy statement

The Group has committed to achieving the objectives above by:

  • Implementing and maintaining an Information Security Management System that meets the requirements of ISO 27001:2022 and all applicable regulatory requirements.
  • Systematically identifying security threats and applying a risk assessment procedure that identifies appropriate control measures for implementation.
  • Regularly reviewing security threats and testing and auditing the effectiveness of control measures.
  • Maintaining a risk treatment plan that is focused on eliminating or reducing security threats.
  • Maintaining and regularly testing business continuity plans for all critical services.
  • Clearly defining responsibilities for implementing and managing the ISMS.
  • Establishing information security objectives at relevant functions and levels.
  • Providing appropriate information, instruction, and training so that all employees are aware of their responsibilities and legal duties and can support the implementation and management of the ISMS.
  • Implementing and maintaining a suite of supporting documents that detail how the objectives of this policy are achieved and provide guidance on achieving them.
  • Ensuring that adherence to this policy is a condition of employment for all colleagues.
  • Implementing measures to ensure all organisations working for and on behalf of the Group who access or process any of the Group’s data meet all applicable information security requirements.
  • Ensuring that this policy is available to interested parties, and significant and relevant changes to the policy are communicated.
  • Implementing measures to ensure all information security incidents are reported to the Information Management team.
  • Handling violations of this policy in line with the company’s Disciplinary Policy.

Review

This policy will be reviewed at least annually, and whenever significant changes to the business impact the Information Security Management System.

Surpass Test Centre CCTV Policy

Last updated 10 April 2025.

Introduction

This policy outlines the use of Closed-Circuit Television (CCTV) within the Surpass Test Centre (Salts Mill, Victoria Rd, Saltaire, Shipley, BD18 3LF, UK) to ensure the safety and security of candidates, staff, and property, and to support the integrity of the examination process.

Scope

This policy applies to all CCTV systems operated within the test centre examination room and covers all individuals within the room, including candidates, staff, visitors, and contractors.

Objectives

  • To ensure a safe and secure environment for all test centre users
  • To deter and detect criminal activity, malpractice, or breaches of examination regulations
  • To support the investigation of incidents or complaints
  • To comply with data protection and privacy legislation

CCTV Operation

  • CCTV is in operation in the examination room
  • Cameras will not be installed in private areas such as restrooms or changing rooms.
  • CCTV systems are operated and monitored by authorised staff only.

Data Protection and Privacy

  • All footage is recorded in accordance with data protection legislation (e.g., UK GDPR, DPA 2018).
  • Recorded images may be used for investigation purposes or shared with relevant awarding bodies or authorities if required.
  • Individuals have the right to request access to their personal data captured by CCTV, subject to standard data subject access request procedures.

Retention and Storage

  • Recorded footage will be stored securely and retained for a period of 30 days, unless required for an ongoing investigation.
  • After the retention period, footage will be securely deleted or overwritten.
  • Requested footage will be securely deleted once any investigations have been closed.

Signage

  • Clear signage is displayed throughout the premises to inform all individuals that CCTV is in operation.

Access and Disclosure

  • Access to recorded footage is restricted to authorised personnel only.
  • Disclosure of footage to third parties (e.g. awarding bodies) will only be made when required or permitted through agreed processes.
  • A record of all disclosures will be maintained.

Review and Compliance

  • This policy will be reviewed annually or in response to changes in legislation or operational requirements.
  • Non-compliance with this policy may result in disciplinary action or legal consequences.

Further information and Questions

  • Requests for further information about, or for use of, CCTV recordings should be directed to the awarding body.

Surpass AI Policy

Last updated 19 January 2026.

1. Purpose and Scope

At Surpass Assessment, we recognise the transformative potential of artificial intelligence (AI) to enhance our operations, products, and services. As a leader in digital assessment technologies and services, we believe that AI can accelerate our mission to improve the assessment experience for everyone. However, we will do so responsibly, with careful consideration of the potential risks and impacts that AI may have on employees, customers, and candidates.

This Policy outlines our commitment to responsible AI in the development, deployment, and supply of our own services to customers, as well as in staff’s internal corporate use of third-party services, to ensure AI risks are responsibly managed and compliance with emerging regulation is achieved.

This AI Policy provides a framework to guide all AI-related activities at Surpass Assessment. It applies to all employees, contractors, and third-party suppliers in all geographical regions in which the business operates, covering the procurement, development, and use of AI within Surpass Assessment.

Exceptions to this policy must be agreed in writing with the Compliance Group and recorded in the Surpass Risk Register.

This policy will be reviewed by the Compliance Group at least twice a year and when significant changes occur.

2. Definitions

Some definitions follow those set out by leading industry standards but may have been adjusted or simplified for ease of understanding or for the context of Surpass.

Surpass, Surpass Assessment or the company means BTL Group Ltd. t/a Surpass Assessment, Surpass Assessment Inc. or any other wholly owned or controlled legal entities under the umbrella of Surpass.

Users means any employee, contractor, or third-party supplier, working for or on behalf of Surpass, as covered under the scope of this policy.

AI means an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.

AI model means a component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs.

AI system means any data system, software, hardware, application, technology, tool, or utility that operates in whole or in part using AI.

3. Acceptable Use and Confidentiality

Surpass Assessment is committed to protecting the confidentiality, integrity, and availability of its data, and that of its customers, employees, and partners. All staff and contractors must adhere to the following policies
when handling company or customer data:

  • Data Protection Policy
  • Information Security Policy and Information Classification and Exchange Policy
  • Supplier Management Policy
  • Acceptable Use of IT and Communications Policy

For the avoidance of doubt, the following rules apply to the use of generative AI and other external AI services:

Prohibited without prior approval

Users must not input any confidential or restricted information into any non-approved AI systems. This includes but is not limited to:

  • Customer data
  • Candidate data
  • Personally identifiable information (PII)
  • Company financials or strategy
  • Source code, architecture diagrams, or proprietary algorithms
  • Customer communications (e.g. emails)
  • Non-public documentation

Approved AI systems

The following AI systems have been approved for specific use cases and configured by Surpass to securely handle confidential information:

  • Microsoft Copilot, available to all staff and accessed using their work account. NOTE: Only the work
    version provides Enterprise data protection; users must confirm that the Enterprise data protection indicator is displayed within the chat.
  • Microsoft 365 Copilot, accessed via the user’s Office 365 account (if users have been granted a
    company 365 Copilot licence)
  • ChatGPT for Business (if the user has been granted a company ChatGPT Business licence)
  • GitHub Copilot (for approved development teams)
  • Loopio (for proposal generation by the sales team)
  • Ironclad (for contract management processes)

Permitted non-confidential use cases

Per the Surpass Supplier Management Policy, and for the avoidance of doubt, the use of external AI websites and tools (such as Google Gemini, ChatGPT, etc.) is permitted as long as the activity does not involve confidential or restricted data.

If an AI tool is required to process confidential or restricted data, it must be authorised through the supplier onboarding process first.

Examples of where external tools can be used are:

  • Asking for definitions or explanations of technical terms or concepts.
  • Summarising a publicly available article for internal discussion.
  • Generating ten social media captions for an upcoming public event.

Examples of tasks that must only utilise authorised tools are:

  • Generating code improvements by uploading proprietary source code for analysis.
  • Asking for a summary of discussion points and actions from a meeting with a client.
  • Analysing and providing responses to customer complaints or queries.

When in doubt users should:

  • Treat information as confidential.
  • Only use company approved tools and systems.

Environmental Considerations

Employees should exercise sensible judgment when using AI, recognising that each prompt carries a resource and environmental cost. Employees should aim to keep requests purposeful, efficient, and only use AI where it is the most appropriate tool.

Personal Use of AI Systems

Company-provided AI tools are primarily for business purposes. Limited personal use is permitted in line with the Acceptable Use of IT and Communication Systems Policy and where additional compute costs and impacts are minimal.

4. Responsible AI Principles

In alignment with our established corporate social responsibility values, we are committed to upholding responsible principles in the procurement, development, deployment, supply, and use of AI technologies.

Where we incorporate AI technology into our products and services that we provide to customers, our Responsible AI Principles encompass the following:

Customer-driven

Customers must maintain complete authority over the adoption and use of AI technology, in alignment with their own organisational principles, objectives, and guidelines.

Accurate and transparent

AI technology outputs should be accurate and include justifications and citations to sources so that users can determine whether outputs are accurate, reliable, and appropriate for their intended uses.

Unbiased

AI technology should incorporate processes that continuously aim to identify and eliminate any bias and discrimination for its designed use, so that it does not cause harm to others.

Secure and private

AI technology must have robust enterprise security controls and compliance with current industry standards, so that no customer-provided data (including prompts and outputs) is accessible to unauthorised parties or used for training data.

Human-monitored

AI technology should be designed to enable ongoing monitoring of AI outputs for error detection and correction, and to ensure ease of human oversight.

Intuitive

The inclusion of AI technology should be engineered to be intuitive and ensure a seamless user experience, so that only minimal additional training is required for users and SMEs.

Embedded

AI technology should be embedded directly within existing user interfaces wherever possible and appropriate, so that users and SMEs can take advantage of AI at the touch of a button within their existing workflows and processes.

Expertly directed

A dedicated, multi-disciplinary Advisory Board should provide insights and thought leadership so that the responsible use of AI technology within assessment products and services is thoughtfully and effectively developed.

Societally Responsible

AI technology should be developed and deployed with consideration for its broader societal implications. This includes evaluating potential effects on communities, individuals, and the environment, and taking proactive steps to minimise harm while promoting positive outcomes.

5. AI Strategy

As a leading provider of assessment technologies and services, AI has the power to improve quality and reduce costs in the production and delivery of assessments, adding significant enterprise value and bolstering our competitive advantage, as well as providing significant internal efficiency improvements.

We will buy, build, and supply AI systems and applications for the purpose of:

  1. Improving internal efficiency or increasing the quality of service to customers, including but not limited to processes relating to sales, marketing, legal, infosec, development, and operations.
  2. Providing externally facing technologies or services to customers for them to streamline their own activities or increase the quality of their own services.

However, due to the rapid evolution of AI systems, we also foresee challenges in adapting the organisation to responsibly procure, utilise, and supply AI systems. These challenges include: rapidly changing industry regulatory requirements; rapidly evolving laws around the world; a lack of staff with specialised AI knowledge and skills; easily and freely available AI tools ranging across all degrees of security and reliability; and a very mixed appetite within various industries, including the assessment industry, towards the use of AI.

With these potential benefits and challenges in mind, a long-term business strategy shall be developed and implemented, incorporating AI initiatives in alignment with our principles, policies, and objectives. This strategy will also incorporate the perceived challenges and risks of procuring, utilising, and supplying AI systems, including the competitive and security considerations of not utilising AI. Continuous evaluation and adaptation of these objectives and risks will be crucial to ensure our long-term success and maximise opportunities provided by further technological advancements.

Recognising the need for customised, flexible, and unobtrusive organisational adaptation for AI, the contents of this AI Policy shall be used as an augmenting layer on top of Surpass Assessment’s existing governance, policies, and processes. We also anticipate that the adoption and transition of all aspects of this policy cannot be achieved overnight and will require a period of transition. Surpass Assessment also commits to continually improve the suitability, adequacy, and effectiveness of this AI Policy and of its AI management system.

While Surpass Assessment’s AI strategy will evolve into a complete practice over time, the urgency of AI risks requires that we identify areas of highest priority as a procurer, developer, and supplier of AI systems:

  • For bought systems:
    Responsibly procuring AI will require investment in the following capabilities:
    1. A rigorous and principles-driven procurement process that sufficiently weighs marketplace options and comprehensively assesses the risks attached to potential suppliers and their product or service;
    2. Role and system-specific training and educational materials for employees such that procured AI can be used safely and responsibly.
  • For built systems:
    Responsibly building AI powered systems will require investment in the following capabilities:
    1. A product management programme that balances the forward inertia of innovation with the necessary governance gates and other processes, such as AI impact assessments, to sufficiently manage risk.
    2. Processes and tools to enable systematic and comprehensive documentation of AI systems throughout their lifecycle, in accordance with regulatory requirements.
    3. Incorporation of the Responsible AI Principles, and the Secure Software Development Life Cycle (“SSDLC”) processes into the selection and integration of AI models.
    4. Responsible AI training, including in our Responsible AI Principles, for all technical and nontechnical roles involved in an AI system’s lifecycle across design, development, deployment, and operation.
  • For supplied systems:
    Responsibly supplying (e.g. through licensing) AI systems to customers will require investment in the following capabilities:
    1. Development of audience-specific guidance and documentation for each AI system, such as for potential customers, users, or the public. This guidance and documentation should clearly articulate the AI system’s capabilities, security controls, and training methods.
    2. A legal means to clarify liability and other requirements with customers through contract addendums or amendments.
    3. Regular assessment of downstream impacts of AI products, enabled by transparency and shared learning with customers while also protecting users’ privacy and other rights.

6. Governance Structure

The following executive or senior leadership positions are designated as the sponsors of Surpass Assessment’s responsible AI approach. Executive sponsors bear responsibility for ensuring that our responsible AI strategy is developed and executed effectively and are ultimately accountable for its success.

  1. The Co-Chief Executive Officers (Co-CEOs) are jointly accountable for defining and upholding Surpass Assessment’s strategic direction for the responsible use of AI. This includes establishing the company’s responsible AI principles, setting overall risk appetite, and ensuring that AI governance is embedded across all functions and regions. The Co-CEOs provide executive sponsorship for the Compliance Group and the Community AI Advisory Board, delegate ownership of AI objectives to appropriate leaders, and ensure that AI is used to drive innovation while safeguarding trust, compliance, and customer impact.
  2. The Chief Operating Officer (COO) is accountable for ensuring that the implementation and use of AI systems align with Surpass Assessment’s operational standards and customer commitments. The COO oversees the delivery of AI-related activities across customer-facing functions and is responsible for ensuring that customer-impacting AI systems are deployed responsibly and in accordance with this policy.
  3. The Chief Information Officer (CIO) is responsible for ensuring that all proposed AI systems procured from external vendors are subject to appropriate review and governance. This includes evaluating compliance with legal, regulatory, security, and internal policy requirements prior to adoption. The CIO is also responsible for ensuring that appropriate risk assessments and procurement procedures are followed, and that AI systems are reviewed for potential operational, ethical, or data-related risks. The CIO reports to the COO and ensures that technology decisions align with broader operational and customer priorities.
  4. The Head of Product Management is responsible for ensuring that any AI powered functionality developed within the Surpass Platform adheres to Surpass Assessment’s defined Responsible AI Principles. This includes ensuring transparency of AI features through comprehensive customer documentation. The Head of Product Management is accountable for balancing innovation with governance and for ensuring that all AI-enabled features meet quality, usability, and compliance expectations before release.
  5. The Chief Architect is responsible for ensuring that built and supplied AI systems are designed and integrated in a manner that upholds Surpass Assessment’s Responsible AI Principles at the architectural level. This includes embedding responsible AI considerations into system design, data architecture, and technical standards.
  6. Surpass Service Owners are responsible for ensuring that any AI functionality introduced within their service areas complies with Surpass Responsible AI principles, contractual obligations, and relevant regulatory requirements. They are accountable for assessing the appropriateness of AI capabilities in context, coordinating with Product, Compliance, and Legal expertise as needed, and ensuring that any AI-powered features meet expected standards for performance, transparency, and user experience. Service Owners play a key role in monitoring AI functionality post-deployment and supporting ongoing risk and impact assessments within their domains.

In addition to the executive and senior leadership responsibilities defined above, the following broader responsibilities have been defined:

  1. All Managers must ensure that employees and contractors within their area of responsibility are aware of, understand, and comply with this policy. They must address any nonconformities or breaches proportionately, and in accordance with this policy and disciplinary procedures.
  2. Relationship owners must ensure that any relevant third parties within their area of responsibility are aware of, understand, and comply with this policy. They must address any nonconformities or breaches proportionately, and in accordance with the supplier management process and relevant contractual agreement.
  3. All employees, contractors, and relevant third parties are responsible for complying with this policy and all associated procedures. They must promptly report any actual or suspected non-compliance, security incidents, or vulnerabilities to their line manager or the Information Management Team.

Surpass Assessment’s responsible AI strategy shall be led by two major governance bodies:

    1. An internal Compliance Group shall provide executive leadership and oversight, including direction, mandates, and resourcing for responsible AI efforts, in a timely manner. The Compliance Group includes both UK and US Co-CEOs, CIO, Chief HR Officer and Chief Financial Officer, and will draw in additional roles as necessary.
    The Compliance Group shall convene once per month. Its responsibilities in relation to AI governance include:
    1. Approving purchase and subsequent regular monitoring of the performance of AI systems and ensuring alignment of purchased AI systems to the Surpass strategic objectives.
    2. Directing the development and updating of both cross-functional and function-specific AI guidance and tools alongside departmental leaders.
    3. Developing AI-related inventories.
    4. Providing responsible AI training.
    5. Ensuring the Surpass AI Policy is reviewed and updated at least twice a year and following significant changes.

    The Compliance Group will also ensure that the following additional guidance for AI systems shall be incorporated into existing compliance activities:

    1. The Surpass Compliance Group shall review and interpret existing and emerging legal and regulatory AI requirements, including those raised by the Community AI Advisory Board. Where updates are required as a result of regulatory developments, the Compliance Group will collaborate with accountable roles (e.g. COO, Head of Product Management, Service Owners) to support necessary changes to policies, processes, or controls in line with Surpass Assessment’s AI governance structure.
    2. Surpass Compliance Group members shall receive appropriate foundational AI training to enable successful implementation of compliance organisation-wide.
    3. The Surpass Compliance Group shall provide appropriate data security training that incorporates AI considerations for the workforce, with the support of Senior Leadership.

    2. An external Community AI Advisory Board shall guide Surpass leadership in the adoption of AI functionality within its products and services. The Community AI Advisory Board consists of the Surpass US Co-CEO, Surpass COO, specialist legal expertise, and expert representation across key industry segments.

The Community AI Advisory Board shall convene at least once every 6 weeks. Its responsibilities include:

  1. Defining the Surpass Responsible AI principles, followed by periodic reviews.
  2. Providing guidance in the implementation of AI functionality within the Surpass services.
  3. Providing guidance to Surpass Assessment in its AI policies.
  4. Providing awareness of evolving AI laws, regulatory requirements and other external factors relating to AI within the assessment industry.

7. Regulatory Compliance

Compliance with existing data, analytics, and technology regulation at Surpass Assessment is implemented by the following policies:

  1. Data Protection Policy and Data Protection Policy Guidance
  2. Information Security Policy and Information Classification and Exchange Policy
  3. Supplier Management Policy

8. Data Management

Data management at Surpass is implemented by our existing:

  1. Data Protection Policy;
  2. Information Security Policy and Information Classification and Exchange Policy
  3. Supplier Management Policy

We are committed to meaningful data transparency in the use of AI. Additionally, Surpass identifies and aligns with existing regulations and guidelines for data management and reporting, including GDPR and the EU AI Act.

    1. For bought systems:
      During the early stages of a project, teams shall identify, characterise, justify, and document the type and quantity of data needed for an AI system. In partnership with the Compliance Group, project teams shall request the following information from the supplier:
      • Data sources used to develop and train the AI system being provided (to the extent available).
      • Data captured and privately stored by the AI system.
      • Data captured and publicly stored or utilised for further training of publicly available AI models.
    2. For built systems:
      Project teams shall document how data sets and AI models are used in the development of the system. In addition, project teams shall document the data used to train, validate, and test the system, how data sets are used to support the operation of the system, and details of how such data is collected, utilised, stored, and/or disposed of.
    3. For systems supplied to customers:
      In partnership with compliance, sales and customer success teams, project teams shall determine and compile data documentation that is relevant, required, or requested by customers. Project teams shall clarify and establish data control and sharing requirements with the customer, including what data received or created by the system will be shared with Surpass, what customer data will be visible to us as their supplier, what data related or resulting from use of the model will be retained by us and/or used to train the system, and what data related or resulting from use of the model will be controlled by the customer.

9. Stakeholder Engagement

We will foster open communication with external stakeholders on AI activities, allowing for feedback and addressing concerns regarding AI decisions or impacts. In partnership with customer success teams and the Surpass Community AI Advisory Board, project teams shall engage external stakeholders in relation to AI systems supplied to customers, and shall:

  • Compile FAQ information and share with external stakeholders.
  • Engage with external stakeholders through user groups.
  • Openly and collaboratively share learnings from the Surpass customer community.
  • Share plans for further changes or enhancements to supplied systems.

10. Risk Management

At Surpass Assessment, we recognise that AI systems bring both opportunities and risks, requiring a proactive approach to identify, assess, and mitigate potential impacts. Our Risk Management framework ensures that all AI-related activities align with responsible AI principles, regulatory standards, and organisational values, while fostering trust among stakeholders.

1. Approach to Risk Management

We adopt a comprehensive risk management framework that includes:

  • Identification: Systematically identifying risks across all stages of the AI lifecycle, including risks to individuals, groups, and societal impacts.
  • Assessment: Measuring risks using standardised tools and methodologies, such as the EU AI Act’s Fundamental Rights Impact Assessment (“FRIA”) (once available), to evaluate likelihood, severity, and impact.
  • Mitigation: Developing targeted strategies to address risks, including safeguarding against potential harms and ensuring system trustworthiness.
  • Monitoring and reassessment: Continuously monitoring AI systems post-deployment to track risks and adapt to our evolving environment.

2. Risk Tolerance and boundaries:

  • We align risk tolerance thresholds with industry best practices and the EU AI Act to ensure responsible and compliant use of AI.
  • AI systems posing unacceptable risks (e.g., manipulation, discriminatory practices) are strictly prohibited.

3. Transparency and accountability

  • All risk assessments, including those conducted using the FRIA framework, are documented as part of our AI business strategy, and accessible to relevant stakeholders.
  • Where appropriate, comprehensive documentation of FRIA evaluations, decisions, and outcomes is maintained to ensure transparency, traceability, and accountability.

4. High-risk systems (based on EU AI Act definition):

  • Stringent safeguards and continuous monitoring are applied to high-risk AI systems to mitigate adverse outcomes and ensure responsible use.
  • Systems are monitored for deviations from expected behaviour or performance, enabling early identification of risks.

11. Training and Workforce Management

We recognise that the effective and responsible use of artificial intelligence (AI) depends on a skilled, knowledgeable, and diverse workforce. To support this, we are committed to providing comprehensive training to ensure employees are equipped to manage AI responsibly across its lifecycle.

Role-Specific Training: Training programmes will be customised to reflect specific responsibilities, such as:

  • Developers: Technical training on AI tools and models.
  • End Users and Operators: Training on system use, interpretation of AI outputs, and escalation protocols for issues.
  • Compliance Group: Training on regulatory compliance, risk assessments, and oversight.

Communities of practice: Cross-functional forums and workshops will provide opportunities to share insights, challenges, and solutions related to AI.
Adherence to principles: Employees will be trained and regularly reminded about the Surpass Responsible AI Principles, and escalation protocols where any deviation is observed.
Onboarding for AI roles: New hires will undergo onboarding that includes guidance on our Responsible AI principles, regulatory commitments, and responsible AI standards.
AI workforce needs: Human Resources will work with department heads to identify skill gaps and inform recruitment and upskilling strategies to align with the company’s AI objectives.


This Policy is not intended to, and does not create any rights for any employee, customer, supplier, competitor, shareholder or any other person or entity.

For organisations wishing to incorporate aspects of this policy into their own AI policy, please cite the Surpass Community AI Advisory Board.


For more information please contact informationsecurity@surpass.com.
