
Smart Tools, Safe Use: Virtual Assistants in the Workplace


Artificial Intelligence is changing the way we work, and one of the most visible tools is the AI Virtual Assistant. Systems such as ChatGPT can generate text that looks and sounds human, answer questions, support decision making, and even create draft materials. Used in the right way, they can boost efficiency and help us think differently, but without clear boundaries they also bring risks to privacy, accuracy and trust.


What Are AI Virtual Assistants?

AI Virtual Assistants are computer programs that understand and produce language. They can be used to draft communications, suggest solutions, create training materials, or support research. Their speed and adaptability are useful, but they also raise ethical, legal and social questions that every organisation needs to take seriously.


Appropriate Use in the Workplace

Clear rules on how to use these tools safely are essential. AI assistants are best suited for internal and non-critical tasks such as:

  • Drafting emails, briefings or policies for colleagues.

  • Supporting training and development work.

  • Helping with general research or brainstorming.

Staff must always check and refine AI-generated content to make sure it is accurate and of good quality.


Restrictions and Prohibited Use

We always advise our customers that AI Virtual Assistants should not be used for:

  • Formal communication with clients or stakeholders unless explicitly approved.

  • Sensitive data such as personal, financial, or confidential company information.

  • Protected or copyrighted materials.

Boundaries like these reduce legal and reputational risks while protecting employees, customers and partners.


The Problem of Accuracy

A major issue with these systems is that they sometimes provide incorrect or completely fabricated information. For example, two New York lawyers were fined after submitting a legal brief that included fake case citations generated by ChatGPT.

This shows why fact checking and human oversight are essential whenever AI tools are used. Writing clear and specific prompts will improve results, but nothing replaces professional judgement.


Prompt Like a Pro


  • Be specific and clear: State exactly what you want, including context and detail.

  • Use examples: Show the format or style you expect in the response.

  • Iterate and refine: If the first answer is not right, rephrase your request or ask follow-up questions.

  • Provide context: Explain the scenario or purpose so the AI understands what you need.

  • Specify the format: Say if you want a list, a paragraph, or a step-by-step explanation.

  • Control length: Indicate whether you need a short summary or a detailed response.

  • Correct and guide: Give feedback if the answer is off track and steer the AI toward what you need.

  • Avoid ambiguity: Use precise language to reduce misunderstandings.

  • Think ethically: Use inclusive and respectful language in your prompts.
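
For example (the policy and details here are invented purely for illustration), a vague prompt such as "Write something about the new policy" will produce a generic answer, whereas "Draft a friendly 150-word email to all staff summarising the three main changes to our home-working policy, ending with a bulleted list of key dates" applies several of the tips above: it is specific, gives context, sets the format and controls the length.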


Legal and Ethical Considerations

AI use must always comply with the law, respect intellectual property rights and avoid bias or discriminatory language. We advise customers to ensure that colleague, client or service user names are never entered into the system, and that care is taken with AI outputs to avoid copyright or attribution issues.


Applications for Software Engineers

AI Virtual Assistants can support software development by generating code, producing documentation or suggesting improvements. However, they can also introduce risks such as security weaknesses, biased training data or inefficient code. Developers should use AI as a support tool, not a replacement, and must always review the output carefully.
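
As a hypothetical illustration of the kind of review this requires, the short Python sketch below contrasts an AI-suggested database lookup that pastes user input straight into a SQL string (a classic injection weakness) with the parameterised version a developer should insist on. The table, column and function names are invented for the example.

import sqlite3

# Throwaway in-memory database, just for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("Alice",), ("Bob",)])

def find_user_unsafe(name):
    # Typical AI-suggested draft: builds the query by string formatting,
    # which lets crafted input rewrite the SQL (injection risk).
    query = f"SELECT id, name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Reviewed version: a parameterised query keeps data separate from SQL.
    return conn.execute("SELECT id, name FROM users WHERE name = ?", (name,)).fetchall()

crafted = "nobody' OR '1'='1"
print(find_user_unsafe(crafted))  # returns every row: the injection succeeded
print(find_user_safe(crafted))    # returns nothing: the input is treated as plain data

The point is not this specific bug but the habit: treat AI-generated code as a first draft and review it with the same rigour as any other contribution.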


Quality and Oversight

For AI to be used safely in the workplace, human oversight is vital. This means:

  • Fact checking and editing all outputs.

  • Refining tone so content sounds natural and appropriate.

  • Checking that the language is right for the audience.

  • Auditing use of AI tools and providing staff training.


Reporting and Accountability

Any misuse, concerns or incidents involving AI Virtual Assistants should be reported promptly to the Information Security or Data Protection lead. Everyone has a part to play in ensuring that these tools are used responsibly.


Summary

AI Virtual Assistants are an exciting opportunity to improve productivity and creativity at work. To make the most of them we need to set clear rules, keep human oversight at the centre, and always consider the ethical and legal context. Used in this way they can support our work while protecting our people, our reputation and our values.


Emma Kitcher, Ethical AI Nerd
