Responsible Use of AI Tools in Line with Koç University Policies
The following principles apply to all members of the Koç University community:
Transparency: The use of AI tools must be clearly acknowledged in any text, presentation, or publication.
Ethical Responsibility: Users are personally responsible for the accuracy, academic validity, legal compliance, and ethical implications of content generated by AI systems.
Privacy and Security: Sensitive, personal, or identifiable information must never be entered into AI systems.
Academic Integrity: Contributions made through AI must be properly attributed, and any use that may compromise the learning process should be strictly avoided.
You can access more detailed information and institutional guidelines through the links below:
For any questions or further assistance, please contact the IT Directorate.
Data Security
Koç University considers data security, personal data privacy, and the protection of institutional information assets as fundamental priorities in the development and use of AI systems.
This article outlines the information security, data privacy, and legal compliance obligations related to the use of AI systems across all academic, administrative, and research processes.
Data Security Principles and Guidelines
1. General Security Standards
All data used in AI systems must be processed in accordance with Law No. 6698 on the Protection of Personal Data (KVKK), the University's Information Security Policy, Data Classification Policy, and other relevant regulations.
Data processing activities must always adhere to the principles of purpose limitation, data minimization, accuracy, timeliness, and limited retention periods.
2. Technical and Administrative Security Measures
To ensure data security, the following technical and administrative measures should be implemented to the extent possible:
Data anonymization and pseudonymization (use of masked data),
Data encryption, secure data transmission, and maintenance of log records,
Authorized user access and identity authentication processes,
Penetration testing, network firewalls, and monitoring systems,
Tracking and reporting of data security incidents.
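As an illustration of the pseudonymization and masking measure listed above, the sketch below replaces direct identifiers with keyed hashes before a record could ever be passed to an AI system. The field names and key handling are hypothetical assumptions for illustration, not a prescribed University implementation.

```python
import hashlib
import hmac

# Hypothetical institutional secret; in practice it would live in a key vault
# managed by the IT Directorate, never alongside the data.
PSEUDONYM_KEY = b"replace-with-a-secret-managed-by-itd"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable keyed hash (pseudonym)."""
    digest = hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

def mask_record(record: dict) -> dict:
    """Pseudonymize identifier fields; leave non-identifying fields as-is."""
    identifier_fields = {"student_id", "email", "name"}  # assumed field names
    return {
        key: pseudonymize(val) if key in identifier_fields else val
        for key, val in record.items()
    }

record = {"student_id": "0071234", "email": "x@ku.edu.tr", "course": "COMP101"}
masked = mask_record(record)
# Non-identifying fields survive unchanged; identifiers become pseudonyms
# that stay consistent across records, so analysis remains possible.
```

Because the hash is keyed and deterministic, the same person maps to the same pseudonym across datasets without exposing the underlying identifier.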
3. Institutional Infrastructure and System Security
Under no circumstances should personal data or confidential/top secret institutional data be entered into external or unapproved AI systems.
Only Secure AI Systems approved by the IT Directorate should be used.

4. Use and Restrictions on Personal Data
Sensitive personal data (e.g., health information, ethnic origin, political opinions) must not be processed using external AI tools.
The processing of confidential or top secret institutional data is only permitted within approved systems and when predefined conditions are met (such as legal basis under KVKK, informed consent, or ethics committee approval).
5. Data Transfers to Third Parties
Personal data may only be shared with third parties on a valid legal basis and with clear notification to the data subject. Where required, explicit consent must also be obtained.
All data transfer activities must be secured through appropriate data processing agreements.
6. Notification and Consent Procedures
In research or practice-based activities, data subjects must be clearly and understandably informed, and an informed consent process must be implemented.
Individuals must be given the right to withdraw their consent at any time, and these processes should be structured to be easily accessible.
7. Information Classification and Access Control
All institutional information assets must be classified. Confidential and top secret information should only be accessible to authorized staff and must not be input into AI systems.
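One way to enforce this rule in software is a classification gate that refuses to forward restricted material to any AI endpoint. The class labels below mirror the policy's own categories, but the gate function itself is only an illustrative sketch, not an existing University service.

```python
from enum import Enum

class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    TOP_SECRET = 4

# Classes that, under this policy, must never be input into AI systems.
BLOCKED = {Classification.CONFIDENTIAL, Classification.TOP_SECRET}

def submit_to_ai(text: str, label: Classification) -> str:
    """Forward text to an AI system only if its classification permits it."""
    if label in BLOCKED:
        raise PermissionError(
            f"{label.name} data may not be entered into AI systems"
        )
    # Placeholder for a call to an approved Secure AI System.
    return f"submitted: {text}"

submit_to_ai("course schedule draft", Classification.INTERNAL)  # allowed
# submit_to_ai("exam answer key", Classification.TOP_SECRET)    # raises PermissionError
```

Centralizing the check in one gate means every integration point inherits the same blocklist, rather than each caller re-implementing the policy.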
8. Preventing Improper Use
The following types of data must not, under any circumstances, be used with publicly accessible AI systems:
Personal health data
Information classified as confidential or top secret under institutional data policies
Student grades, health records, personnel files, exam contents, strategic documents, etc.
In processes such as human resources or academic evaluation, final decisions must not be delegated to AI systems; meaningful human oversight must always be maintained.
9. Compliance Monitoring and Incident Reporting
Any suspected violations of information security must be immediately reported to: bilgiguvenligi@ku.edu.tr
Where necessary, disciplinary proceedings may be initiated.
Additional Considerations
1. AI System Approval Process
Before any new AI system is deployed, it must undergo a risk-based assessment conducted by the Information Technologies Directorate (ITD) and the AI Governance Committee. The system’s technical, ethical, and legal compliance must be verified.
2. Accountability
All users are directly responsible for the development, use, and accuracy of the outputs produced by AI systems.
Accountability mechanisms such as audit trails, log records, and defined roles must be maintained and followed.
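An audit trail of the kind mentioned above can be as simple as an append-only log recording who invoked which AI system, when, and for what action. The record fields and identifiers here are assumptions for illustration, not a mandated log schema.

```python
import json
import time

def audit_record(user: str, system: str, action: str) -> str:
    """Build one append-only audit-log entry as a JSON line."""
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,      # authenticated institutional identity
        "system": system,  # identifier of the approved AI system
        "action": action,  # e.g. "prompt_submitted", "output_exported"
    }
    # sort_keys keeps field order stable, which helps later log diffing.
    return json.dumps(entry, sort_keys=True)

line = audit_record("jdoe", "secure-ai-chat", "prompt_submitted")
# Each call yields one self-describing JSON line, ready to append to a log file.
```

JSON-lines logs are easy to ship to existing monitoring systems and can be replayed later to reconstruct who did what, supporting the defined-roles and incident-reporting requirements.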
3. Training and Awareness
Koç University provides regular training and awareness programs on data security, information privacy, and ethical use of AI systems for all stakeholders.
Participation in these programs is mandatory for specific roles.
4. Sanctions and Misconduct
Violations of the policies outlined here may result in legal and administrative consequences under applicable regulations.
The following violations, in particular, are subject to serious disciplinary action:
Unauthorized sharing or processing of data
Entering personal data into external AI systems
Use of data in violation of ethical principles or without explicit consent
Generation of uncontrolled content that compromises data security
In cases of severe violations, appropriate legal, administrative, and disciplinary processes will be carried out in accordance with:
Law No. 6698 on the Protection of Personal Data (KVKK)
Law No. 5846 on Intellectual and Artistic Works (FSEK)
Koç University’s relevant disciplinary regulations
In such cases, the Koç University Personnel Disciplinary Regulation will be applied. Where necessary, actions may include suspension from duty, disciplinary investigation, or referral to legal authorities.
Contact and Support
| Subject | Contact Email |
|---|---|
| Reporting a Data Security Incident | bilgiguvenligi@ku.edu.tr |
| Approval for AI System Usage | |
| Ethical and Governance Inquiries | |
Disclaimer
The licensed software and services provided to Koç University employees for the use of artificial intelligence (AI) tools are made available under a pilot program. Within this scope:
The availability and features of these tools depend on the current technological infrastructure, licensing policies, and terms of use set by the respective service providers.
Quotas, functionalities, usage limits, and access periods may be updated by the service providers or the University at any time.
Accordingly, Koç University makes no guarantees regarding the continuity, availability, or performance of these tools and accepts no liability in this regard.
Additionally, during the use of these AI tools:
Users are legally and institutionally responsible for ensuring compliance with data security, personal data protection, copyright, and information confidentiality principles.
Under no circumstances may Koç University staff, students, or affiliates upload University-owned sensitive, confidential, or top secret data (e.g., research data, student information, strategic documents) to any AI tool using personal accounts (e.g., @gmail.com). Any breach of this rule will be considered the sole responsibility of the user. In the event of any data leakage, loss, or unauthorized access resulting from such use, the individual will bear all legal, administrative, and criminal liability. The user is deemed to have accepted full responsibility for any consequences and agrees to compensate the University for any damages incurred.
In all cases, the primary responsibility lies with the user. Koç University shall not be held liable for any legal, administrative, or disciplinary consequences arising from the use of these tools.
By using the tools, users are deemed to have accepted the above conditions and are expected to exercise due diligence accordingly.