
Bossware: The Trust Trap


A few weeks ago, one of my customers asked me what I thought about bossware. Then, almost immediately, the same theme started cropping up in my legal journals: articles on workplace monitoring, algorithmic oversight, and “productivity” tooling reframed as compliance.


This isn’t a new problem, but COVID was the turning point. The sudden shift to remote work came with an equally sudden expansion of digital monitoring. In the moment, it felt like an exceptional measure for exceptional times. But remote and hybrid work didn’t disappear after the crisis passed; they became part of the landscape, and the surveillance that came with them has stayed too, becoming routine.

A recent UK survey found that a third of firms now use bossware to track staff activity, often in ways that would have felt extreme not long ago. What’s striking isn’t only the uptake, but the unease around it.


Trust can’t be automated

Those numbers tell a story about trust. Among managers who objected to bossware, 79 percent said it undermines trust, 70 percent warned it creates a constant sense of being watched, and 58 percent linked it directly to lower morale. You can feel the tension sitting inside that data: workplaces increasingly organised around metrics, dashboards and risk logic, but staffed by humans who know — in their bones — that trust doesn’t work like a software feature you can buy.


58 percent linked bossware directly to lower morale


The rise of bossware shows that when organisations feel anxious, they reach for oversight. When leaders don’t know what’s happening, they want more data. When they worry about risk, they build systems to shrink the human margin for error. The logic is understandable. But it’s also corrosive.


In my previous research, I explored how public authorities adopt automated decision systems not out of malice, but out of a paternalistic desire to manage risk and correct for human fallibility. Yet that impulse erodes the psychological foundations of privacy and autonomy. Surveillance, even when rationalised as benign or necessary, shifts the relationship between watcher and watched into something asymmetric and brittle. The same dynamic is taking root in the private workplace, only dressed up as performance management.


What being watched does to people

The sense of being watched alters behaviour, whether or not anyone ever “acts” on the data. It narrows the space in which we can be fully ourselves, even in the mundane rhythms of a workday.

Erving Goffman wrote that privacy allows us to manage our identities — to move between our public performance and the backstage of our inner lives. Bossware removes that boundary. The screen becomes a stage you never step off.

This is where the chilling effect lives. When you know your keystrokes, idle time or browsing patterns are logged, you become performative. You hesitate before opening a non-work tab, even for something harmless like checking the news. You draft and redraft emails in case their tone is later scrutinised. You avoid searching for union advice or mental health support on your work laptop, just in case. Over time, that low-level self-censorship spreads. People take fewer creative risks. They speak up less. They self-edit in ways that may not be obvious, but reshape the culture.

You don’t need overt punishment for this to matter. Observation alone is enough.


The hidden psychological bill

The cost of that kind of environment isn’t only ethical. It’s psychological. Constant self-monitoring is exhausting. It’s emotional and cognitive labour that rarely gets recognised. Workers learn to sit a little straighter on camera, keep the cursor moving, prove their presence to a system that is always half suspicious. It produces a subtle, grinding hypervigilance. You’re never quite offstage, never entirely unobserved, so you never fully relax.

Over weeks and months, that takes a toll. It shows up as anxiety, burnout, and a kind of disengagement that can look like compliance from the outside. People still meet targets. They still respond to emails. But they do it with less slack in their nervous system, less willingness to improvise, and less belief that the organisation is acting in good faith towards them.

Bossware can tell you a person was “active” for eight hours. It can’t tell you what it cost them to sustain that performance.

Consent isn’t real when it’s a condition of work

Underlying all of this is a basic power imbalance. When surveillance becomes a condition of employment, workers aren’t really consenting. They are a captive clientele, in much the same way citizens are captive to state systems. There isn’t a meaningful “no” available without consequences.

And just as opaque automated government systems dilute accountability, bossware often operates in ways employees neither understand nor can challenge. What is being tracked? How is it classified? Who sees it? What counts as “normal” or “risky” behaviour? The mechanism stays out of reach, yet its consequences are felt in the daily atmosphere of work.



A practical way to separate security telemetry from bossware


I don’t think the answer is “never monitor anything.” Most organisations need security controls. The question is whether those controls are tight, limited, and aimed at genuine harm, or whether they slide into routine observation of people.


For each tool, I use the same test:


  • What does an anomaly look like? (Define patterns, not personalities.)

  • How is it flagged, and to whom? (Avoid alert fatigue turning into routine scrutiny.)

  • What happens automatically? (Blocks are fine for threats; automated judgements about staff are not.)

  • When does a human step in? (Be clear, bounded, auditable.)

Then check: necessity, proportionality, impact on reasonable expectations, and mitigation against drift.
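
To make that concrete, here is a minimal sketch in Python of how the answers could be recorded per tool, so the assessment is written down and auditable rather than living in someone’s head. The class and field names are my own shorthand for this post, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class MonitoringAssessment:
    """One record per tool: the four questions, then the four checks.

    Field names are illustrative shorthand, not a standard schema.
    """
    tool: str
    anomaly_definition: str    # patterns, not personalities
    flagged_to: str            # who sees it, and how
    automatic_action: str      # blocks for threats are fine; judgements about staff are not
    human_intervention: str    # clear, bounded, auditable
    necessity: str
    proportionality: str
    expectation_impact: str
    drift_mitigation: str

swg = MonitoringAssessment(
    tool="Secure Web Gateway",
    anomaly_definition="Repeated hits to high-risk or newly registered domains",
    flagged_to="Dashboard alerts; critical breaches to the Helpdesk mailbox",
    automatic_action="Auto-block with clear user notification and logging",
    human_intervention="Escalate only on repeats within 24h or confidential data",
    necessity="Stops malware and data leaks before harm occurs",
    proportionality="Categories and patterns, not full browsing histories",
    expectation_impact="Filtering risky sites is expected; person-level review is not",
    drift_mitigation="Aggregated reporting first; short retention",
)
```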


Here’s what that looks like control-by-control.


Secure Web Gateway (SWG)

  • Anomaly: repeated hits to high-risk or newly registered domains; large uploads to unsanctioned cloud apps; repeated proxy/VPN circumvention.

  • Flagged to you: dashboard alerts + scheduled reports; critical breaches to the Helpdesk mailbox.

  • Passive response: auto-block with clear user notification and logging.

  • Human intervention: escalate only when incidents repeat within 24 hours or involve confidential data; actions logged.

  • Necessity: stops malware/data leaks before harm occurs.

  • Proportionality: focuses on categories/patterns, not full browsing histories by default.

  • Impact on expectations: filtering risky sites is expected; routine person-level review is not.

  • Mitigation: aggregated reporting first; identity revealed only on repeat/high-risk thresholds; short retention.
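
As a rough illustration of that mitigation, here is a minimal Python sketch of threshold-based unmasking: events are counted against a pseudonym, and a name comes back only once the “repeat within 24 hours” rule trips. The threshold value and salt handling are placeholders of mine, not any vendor’s actual mechanism:

```python
import hashlib
import time
from collections import defaultdict

REVEAL_THRESHOLD = 3           # hypothetical: repeats before anyone is named
WINDOW_SECONDS = 24 * 3600     # the "repeat within 24 hours" rule above
SALT = b"rotate-me"            # illustrative; store and rotate this secret properly

_events: dict[str, list[float]] = defaultdict(list)   # pseudonym -> timestamps

def pseudonym(username: str) -> str:
    """A stable pseudonym, so analysts see patterns rather than people."""
    return hashlib.sha256(SALT + username.encode()).hexdigest()[:12]

def record_high_risk_hit(username: str, now: float | None = None) -> str | None:
    """Log one high-risk event; return the real identity only past the threshold."""
    now = time.time() if now is None else now
    pid = pseudonym(username)
    # keep only events inside the rolling 24-hour window, then add this one
    _events[pid] = [t for t in _events[pid] if now - t < WINDOW_SECONDS] + [now]
    if len(_events[pid]) >= REVEAL_THRESHOLD:
        print(f"AUDIT: identity revealed for pseudonym {pid}")  # every reveal is logged
        return username
    return None   # below threshold: the event stays pseudonymous
```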


HTTPS/SSL Decryption

  • Anomaly: sites failing SSL inspection or generating repeated exceptions.

  • Flagged to you: “SSL Bypass Report” in analytics.

  • Passive response: log exceptions silently; auto-allow sensitive services (NHS, Microsoft, finance/banking) via a documented exception list.

  • Human intervention: quarterly review of exemptions; investigate repetitive failures as possible evasion.

  • Necessity: encrypted traffic is where modern threats hide.

  • Proportionality: decrypt only where defined risk justifies it; maintain a living exception list for clinical/sensitive services.

  • Impact on expectations: blanket inspection exceeds normal expectations.

  • Mitigation: target decryption to risky categories; publish the exception list to IGSG/Caldicott; share a plain-English staff summary; record approvals for overrides.
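
A living exception list can be as simple as a documented structure the gateway policy is generated from. This Python sketch uses hypothetical domains and category names; the point is that exemptions are explicit, reviewable, and always win over decryption:

```python
# Illustrative exception list. The real one lives in the gateway's policy
# config and is published to IGSG/Caldicott, as described above.
NO_DECRYPT = {
    "clinical": ["nhs.uk"],
    "identity": ["login.microsoftonline.com"],
    "finance": ["onlinebanking.example"],   # hypothetical domain
}
DECRYPT_CATEGORIES = {"newly-registered", "uncategorised", "high-risk"}

def should_decrypt(hostname: str, category: str) -> bool:
    """Decrypt only where a defined risk justifies it; exemptions always win."""
    for domains in NO_DECRYPT.values():
        if any(hostname == d or hostname.endswith("." + d) for d in domains):
            return False
    return category in DECRYPT_CATEGORIES

assert should_decrypt("dodgy-new-site.example", "newly-registered") is True
assert should_decrypt("www.nhs.uk", "health") is False
```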


Logging & Reporting / SIEM

  • Anomaly: unusual outbound connections; anomalous port use; high-volume traffic bursts.

  • Flagged to you: scheduled reports; SIEM API feed when integrated.

  • Passive response: continuous technical telemetry collection.

  • Human intervention: weekly Helpdesk review; escalate only significant anomalies.

  • Necessity: detects incidents and meets security obligations.

  • Proportionality: collect technical metadata, not message/content.

  • Impact on expectations: central logging is expected; naming people by default isn’t.

  • Mitigation: hashed identities; “break-glass” reveal only when thresholds met; audit all access.
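
Here is what hashed identities with a “break-glass” reveal might look like in outline. It is a Python sketch under my own assumptions (a keyed hash, a separate directory, an audit entry for every reveal), not any particular SIEM’s feature:

```python
import hashlib
import hmac
from datetime import datetime, timezone

PEPPER = b"held-in-a-secrets-manager"   # illustrative; never hardcode in real life
audit_log: list[dict] = []              # every reveal is itself an auditable event

def hashed_identity(username: str) -> str:
    """What the SIEM stores by default: a keyed hash, not a name."""
    return hmac.new(PEPPER, username.encode(), hashlib.sha256).hexdigest()

def break_glass(hash_value: str, approver: str, reason: str,
                directory: dict[str, str]) -> str | None:
    """Reveal a name only with a recorded approver and reason.

    `directory` maps usernames to their hashes and is held apart from the SIEM.
    """
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "approver": approver,
        "reason": reason,
        "hash": hash_value,
    })
    for username, h in directory.items():
        if hmac.compare_digest(h, hash_value):
            return username
    return None
```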


Chat-based GenAI Monitoring

  • Anomaly: LLM prompts containing sensitive, confidential, or personal data.

  • Flagged to you: AI module alerts.

  • Passive response: allow use but tag/log as “Potential Data Disclosure.”

  • Human intervention: weekly cyber review; confirmed disclosures to DPO/IG; proportionate follow-up and training.

  • Necessity: prevents accidental disclosure in prompts.

  • Proportionality: tag risky patterns first; pull full prompt/response only if thresholds met.

  • Impact on expectations: routine prompt review feels intrusive.

  • Mitigation: metadata-first; full content gated by human approval; staff training on safe prompting.
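
A metadata-first approach can be sketched in a few lines: tag which sensitive patterns matched without storing the prompt itself. The regexes below are illustrative stand-ins, far looser than a real, tested DLP policy would be:

```python
import re

# Illustrative patterns only; a production DLP policy is broader and tested.
SENSITIVE_PATTERNS = {
    "nhs_number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def tag_prompt(prompt: str) -> dict:
    """Metadata-first: record WHICH patterns matched, never the prompt text.

    The full prompt/response is pulled later, behind human approval, only
    if thresholds are met, as the control above describes.
    """
    matched = [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]
    return {
        "tag": "Potential Data Disclosure" if matched else None,
        "patterns": matched,           # metadata only
        "prompt_length": len(prompt),  # no content stored at this stage
    }

print(tag_prompt("Summarise this: patient 943 476 5919, j.doe@example.com"))
# -> {'tag': 'Potential Data Disclosure', 'patterns': ['nhs_number', 'email'], ...}
```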


Zero Trust Network Access (ZTNA)

  • Anomaly: connection attempts from unmanaged/non-compliant devices.

  • Flagged to you: ZTNA access logs and posture failures.

  • Passive response: auto-deny with user prompt to contact IT.

  • Human intervention: review posture failures; allow exceptions only for legitimate service accounts/new devices; monthly RBAC audits.

  • Necessity: ensures only compliant devices access resources.

  • Proportionality: checks posture, not personal content.

  • Impact on expectations: access checks are normal on managed networks.

  • Mitigation: minimal posture signals; fast remediation; time-limited exceptions.
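
For posture checks, the decision logic is deliberately boring: a handful of technical booleans and a helpful denial message. A minimal sketch, assuming hypothetical signal names:

```python
from dataclasses import dataclass

@dataclass
class DevicePosture:
    """Minimal posture signals only: no personal content is inspected."""
    managed: bool
    os_patched: bool
    agent_running: bool
    av_compliant: bool

def access_decision(p: DevicePosture) -> tuple[bool, str]:
    """Auto-deny non-compliant devices, with a prompt to contact IT."""
    if not p.managed:
        return False, "Unmanaged device: please contact IT to enrol it."
    failed = [name for name, ok in [
        ("OS patch level", p.os_patched),
        ("endpoint agent", p.agent_running),
        ("antivirus", p.av_compliant),
    ] if not ok]
    if failed:
        return False, f"Posture check failed ({', '.join(failed)}): contact IT."
    return True, "Access granted."
```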


DNS Security / Protective DNS

  • Anomaly: frequent queries to newly observed/malicious/C2 domains.

  • Flagged to you: DNS reports.

  • Passive response: PDNS block page.

  • Human intervention: weekly DNS anomaly review.

  • Necessity: blocks known-bad domains before compromise.

  • Proportionality: domain-level control; no page content.

  • Impact on expectations: low privacy impact; false positives frustrate.

  • Mitigation: clear block pages with help links; identity unmasked only for repeat high-risk hits.


Malware Defence / IPS

  • Anomaly: beaconing activity; repeated malware-signature downloads; known IOCs.

  • Flagged to you: real-time security dashboard alerts.

  • Passive response: auto-block, isolate session, log in detail.

  • Human intervention: correlate with EDR; escalate via CIRP if required.

  • Necessity: stops malware/C2 in real time.

  • Proportionality: targets malicious indicators, not general behaviour.

  • Impact on expectations: low privacy impact; naming individuals must be justified.

  • Mitigation: confirm via EDR before contacting users; restrict sensitive detail sharing.


Device Posture Checks

  • Anomaly: missing endpoint agent; outdated OS; AV non-compliance.

  • Flagged to you: ZTNA compliance reports.

  • Passive response: deny connection until compliant.

  • Human intervention: Helpdesk remediates and confirms compliance.

  • Necessity: reduces breach risk from weak endpoints.

  • Proportionality: checks patch/agent/AV status only.

  • Impact on expectations: blocks disrupt work but are standard.

  • Mitigation: keep checks technical and minimal; rapid remediation; log only what proves compliance.


Outbound / Cloud Firewall

  • Anomaly: access attempts to restricted SaaS; high-risk geo-IP destinations.

  • Flagged to you: Cloud Firewall Events report.

  • Passive response: auto-block and log.

  • Human intervention: review patterns for threat intel/tuning.

  • Necessity: prevents risky SaaS/data routes.

  • Proportionality: policy + threat-intel based; not content inspection.

  • Impact on expectations: privacy impact limited; surprise blocks frustrate.

  • Mitigation: investigate at group level first; document geo/IP rules; identity only when genuinely needed.
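
Group-level-first review is mostly an analytical habit, but it can be encouraged by the tooling too. A small sketch, assuming events carry a department rather than a username at this first stage (the field names are hypothetical):

```python
from collections import Counter

def group_level_summary(events: list[dict]) -> Counter:
    """First-pass review at team level; individuals only if genuinely needed.

    Each event carries 'department' and 'destination' (hypothetical field
    names), not a username, at this stage of review.
    """
    return Counter((e["department"], e["destination"]) for e in events)

sample = [
    {"department": "Finance", "destination": "restricted-saas.example"},
    {"department": "Finance", "destination": "restricted-saas.example"},
    {"department": "HR", "destination": "high-risk-geo.example"},
]
# A spike from one team to one restricted SaaS prompts rule tuning or a
# policy conversation before anyone asks "who was it?"
print(group_level_summary(sample).most_common(2))
```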


Global Cloud Service / PoPs (service health)

  • Anomaly: latency spikes, failovers, performance degradation.

  • Flagged to you: service health dashboard.

  • Passive response: reroute to nearest PoP automatically.

  • Human intervention: persistent latency or repeat failovers are escalated to the Helpdesk and support.

  • Necessity: keeps service available and safe.

  • Proportionality: operational metrics only.

  • Impact on expectations: no privacy impact.

  • Mitigation: keep telemetry de-identified; never blend with HR/performance data.


The principle underneath all of this

Security monitoring is legitimate when it stays risk-based, system-level by default, and identity-revealing only on clear thresholds. But the technical design is only half the job. The other half is psychological. People relax when they know where the boundaries are. If staff understand what you are monitoring — and, just as importantly, what you are not monitoring — it takes the edge off that low-level “always being watched” feeling that corrodes morale.


Clear rules, reasonable expectations, and honest communication don’t just protect rights on paper — they protect the trust and breathing room people need to do good work.


Emma Kitcher, Privacy Nerd



 
 
 
