
Are aid agencies putting the data of the world's most vulnerable at risk?

NGOs and charities are being warned that despite their best efforts, they may incidentally be exposing vulnerable service users, including refugees and those fleeing wars, to privacy and security flaws.

The new report, which has been published by Privacy International (PI), was commissioned in collaboration with The International Committee of the Red Cross (ICRC). 

In the study, the two charitable organizations describe a “do no harm” principle designed to help humanitarian bodies better understand how surveillance currently impedes or threatens “the neutral, impartial and independent nature of humanitarian action.” 

According to Privacy International, the manner in which the humanitarian sector currently uses digital and mobile technologies may be having a detrimental effect on the people receiving humanitarian aid.


The humanitarian metadata problem

Over the past 10 years, digital technology has produced great advancements in the field of humanitarian action. Whether through smartphones or social media, the added connectivity these technologies provide has enabled massive improvements in the dissemination of information and heightened levels of communication and organization.

Nowadays, aid agencies use technology for all manner of things including coordinating responses, communicating with staff and volunteers, and engaging with the people they help. However, alongside the grassroots improvements to how humanitarian aid is delivered, concerns are being raised about the digital footprints that these technologies leave behind. 

According to the study, governments, private bodies, and intelligence agencies are showing a lot of interest in the paper trails created by humanitarian projects. The study suggests there is a danger that those third parties are using that data for their own agendas. If so, the data generated by humanitarian projects risks “undermining the impartiality, neutrality and independence” of aid organizations' work.

Lack of protection 

As is always the case with data, the problem arises from the ability to make secondary inferences. The joint study reveals the dangers posed by metadata and the ways it can be used to infer extremely sensitive details, such as someone’s “travel patterns or religious beliefs”.
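To make the point concrete, here is a toy sketch of that kind of secondary inference. Everything in it is invented for illustration (the call log, locations, and dates are hypothetical); it simply shows how bare connection records, with no message content at all, can reveal the sort of patterns the study warns about:

```python
from collections import Counter
from datetime import datetime

# Hypothetical call-metadata records: (timestamp, nearest cell-tower area).
# Note that no message content appears anywhere in this data.
call_log = [
    ("2018-10-05 12:05", "mosque_district"),
    ("2018-10-12 12:10", "mosque_district"),
    ("2018-10-19 12:02", "mosque_district"),
    ("2018-10-07 09:30", "border_crossing"),
    ("2018-10-14 09:45", "border_crossing"),
]

def infer_patterns(records):
    """Count where a phone is seen on each weekday -- a crude profile."""
    profile = Counter()
    for ts, location in records:
        day = datetime.strptime(ts, "%Y-%m-%d %H:%M").strftime("%A")
        profile[(day, location)] += 1
    return profile

profile = infer_patterns(call_log)
# A phone seen near a mosque every Friday around noon suggests religious
# observance; weekly appearances at a border crossing suggest a travel
# pattern -- inferred without reading a single message.
print(profile.most_common(2))
```

Real-world profiling operates at far greater scale and sophistication, but the principle is the same: the metadata alone is enough.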

The research goes on to shed light on the lack of legal protections extended to metadata, and how that tends to result in a lack of transparency and accountability around its generation, collection, and use by governments and industry. 

What’s more, because of the manner in which metadata can be harvested and passed around, there are concerns surrounding the regulation and differing protections afforded to metadata in varying jurisdictions. The research points out that this can lead to radical differences in how data is processed around the globe.


Do no harm

PI and ICRC hope that their study will serve as a warning. They want charities to consider how they currently utilize technology to share “info-as-aid” over messaging apps and social media, or to run projects such as cash transfer, data analysis, and fraud prevention programmes. It is their belief that aid organizations must seek to ground these activities in what the study describes as the principle of “do no harm.”

“The humanitarian community must better understand the risks associated with the generation, exposure, and processing of metadata. This is particularly important for organizations that enjoy certain privileges and immunities but that are not able to counter these risks alone.”

“Specifically, humanitarian organizations need to better understand how data and metadata collected or generated by their programmes, for humanitarian purposes, can be accessed and used by other parties for non-humanitarian purposes (e.g. by profiling individuals and using these profiles for ad targeting, commercial exploitation, surveillance, and/or repression).”

The study uses the example of an individual who has registered for a cash-transfer programme. It explains that the financial institution implementing the programme could use the data it amasses to categorize individuals as “non-trustworthy borrowers”, thereby limiting their future access to financial services.

The study explains that there are countless ways in which the aid that humanitarian projects provide can backfire, leading to future problems for their intended beneficiaries.

An ongoing problem

This isn’t the first time that PI has shed light on the dangers of technology in the humanitarian sector. A study published in 2013 called Aiding Surveillance concluded that the adoption of technologies and “data-intensive systems”, such as biometric identification schemes, was facilitating the repression and surveillance of the at-risk groups they were supposed to help.

Another study published in 2017 by the International Data Responsibility Group warned that data about persecuted groups "could be used to target those individuals."

PI and ICRC are now hoping that their new study will help to educate aid agencies about how they can minimize the traces and digital footprints created by their programmes. Gus Hosein, Executive Director of Privacy International, said:

“Humanitarian organizations like the ICRC have an extraordinary mandate of ensuring humanitarian protection for people across the world. It is essential that their tools, which are often provided by companies, support their efforts rather than undermine them. This way, the very people they are assisting will not be at risk of exploitation and abuse.”

Improvements and solutions

So, what does the study suggest charities do to improve the privacy and security of their services? Primarily, the study encourages humanitarian organizations to become more self-aware about how the technologies they use produce metadata. With this in mind, the study suggests aid programmes should seek to “improve data and tech literacy among staff, volunteers and crisis-affected people.”

The study also encourages organizations to consider how communications are being sent and the level of encryption (or lack thereof) being used. The study concludes that, wherever possible, end-to-end (E2E) encryption should be employed.
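It is worth being clear about what E2E encryption does and does not protect, since that is precisely the report's metadata concern. The toy sketch below (the XOR “cipher”, phone numbers, and envelope fields are invented for illustration; a real deployment should use an audited protocol such as the Signal protocol, not this) shows that even when message content is unreadable in transit, the routing metadata around it remains fully visible:

```python
import json
import secrets

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR cipher for illustration only -- NOT real cryptography.
    # Applying it twice with the same key recovers the plaintext.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(32)          # known only to the two endpoints
message = b"Meet at the clinic at 4pm"

envelope = {
    # Even with E2E encryption, this routing metadata stays readable to
    # the service provider and to anyone observing the network:
    "sender": "+882-555-0101",
    "recipient": "+882-555-0199",
    "timestamp": "2018-10-19T12:02:00Z",
    # Only the content itself is protected:
    "ciphertext": toy_encrypt(message, key).hex(),
}

wire_format = json.dumps(envelope)
# The provider cannot read the message content...
assert b"clinic" not in wire_format.encode()
# ...but who talked to whom, and when, is fully exposed on the wire.
decrypted = toy_encrypt(bytes.fromhex(envelope["ciphertext"]), key)
```

In other words, E2E encryption is necessary but not sufficient: it addresses content interception while leaving the metadata problem the report describes entirely intact.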

Where individuals are benefiting from Cash-Transfer Programmes, the study sets out detailed recommendations designed to reduce the possibility of discrimination and persecution. Finally, the study goes into great detail about the dangers posed by social media and the way that data can be harvested and used against crisis-affected individuals.

Talking about the data created by social media platforms, Ulrich Mans, co-founder of a Dutch organization that helps aid agencies to implement tech innovations, said:

“I'm really hoping that we don't have our Cambridge Analytica moment.”

Unfortunately, it would appear that there is already enough data circulating about at-risk people for the "Cambridge Analytica moment" to have already occurred. As is always the case - where there is data there are mouths feeding at the trough. 


Image credits: serato/Shutterstock.com 

Written by: Ray Walsh

Digital privacy expert with 5 years' experience testing and reviewing VPNs. He's been quoted in The Express, The Times, The Washington Post, The Register, CNET & many more.

