[ITU] In times of crisis, when people are confronted by new and inadequately understood threats, rumours and conspiracy theories are quickly born and quickly multiply.
The IWF says new figures showing that at least 300,000 people in the UK pose a sexual threat to children are a “terrifying escalation” in the battle to keep children safe online.
The National Crime Agency (NCA) has today (3 March) revealed it believes there are a minimum of 300,000 individuals in the UK posing a sexual threat to children, either through physical “contact” abuse or online.
In recent weeks, as schools, businesses, support groups and millions of individuals have adopted Zoom as a meeting platform in an increasingly remote world, reports of “Zoombombing” or “Zoom raiding” by uninvited participants have become frequent.
While those incidents may have initially been regarded as pranks or trolling, they have since risen to the level of hate speech and harassment, and even commanded the attention of the F.B.I.
Women and girls face a “growing crisis” of online harms, with sexual harassment, threatening messages and discrimination making the web an unsafe place to be, Sir Tim Berners-Lee has warned.
The inventor of the world wide web said the “dangerous trend” in online abuse was forcing women out of jobs, causing girls to skip school, damaging relationships and silencing female opinions, prompting him to conclude that “the web is not working for women and girls”.
Legislation announced on Thursday aimed at curbing the spread of online child sexual abuse imagery would take the extraordinary step of removing legal protections for tech companies that fail to police the illegal content. A separate, international initiative that was also announced takes a softer approach, getting the industry to voluntarily embrace standards for combating the material.
The two measures come as tech companies continue to detect an explosion of abusive content on their platforms, and amid complaints that neither Congress nor the companies have been aggressive enough in stopping its spread. An investigation last year by The New York Times found that many companies knew about the problem but failed to quash it, despite having the tools to do so, and that the federal government had not been adequately enforcing a previous law meant to stem the abuse.
U.S. regulators are preparing to take fresh aim at Facebook, Google and other tech giants this week, unveiling new efforts to combat online content that harms or abuses children — and hold Silicon Valley responsible for its spread.
The heightened activity in Washington reflects the government’s simmering frustration with Silicon Valley, along with a growing appetite to rethink decades-old federal laws that spare profitable, popular tech platforms from being held liable for dangerous content that goes viral on their services.
Over 13 years, more than 250 Australians have spent more than $1.3 million to watch child sexual abuse live-streamed on the internet from the Philippines.
The Australian Institute of Criminology (AIC) compiled the data in a landmark study of criminal behaviour online and found the majority of the Australians paying for what has been dubbed “webcam child sex tourism” were aged in their 50s and 60s.
More than half had no criminal record and were from a range of occupations. They included aged care workers, gardeners and even one housewife.
Imagine you find yourself in a world of infinite possibilities, a world with new friends to be found, new games to play and new ideas to explore. But how can you be sure you’re safe? How can you trust people you can’t see? How do you know what you’re learning is true? How can the Internet be safe?
Every year the Council of Europe joins others around the world in celebrating Safer Internet Day on the second day of the second week of the second month – in 2020, that's Tuesday February 11th. It's a day when millions of people get together to promote a safer and better Internet, where everyone feels able to use technology responsibly, respectfully, critically and creatively.
Because we bring together 47 countries from every part of Europe, the Council of Europe is in a powerful position to help create a change for the better. And because our work is to defend human rights, democracy and fairness, we saw a long time ago that we needed to use our experience and expertise to make sure the online world also respected everyone in the same way.
We work with many international partners on building strong internet governance; we drew up the world’s first international legal text to stop crime online; we even launched the first international convention to protect data back in 1981 – before the Internet became the main way to carry out day-to-day transactions.
For families, educators and policy-makers we have launched the Internet Literacy Handbook, which goes hand in hand with the Guide to Human Rights for Internet Users. Last year, we drew up a set of ground rules to make sure children are kept safe in the digital environment. Even young children can learn basic Internet safety rules with the online game “Through the Wild Web Woods”.
So we celebrate Safer Internet Day on one day every year. And every other day we work just as hard to make the Internet truly safe.
A fifth of high school-aged New Zealanders have been exposed to material about self-harm online, and almost as many to content about ways to commit suicide and to become very thin, new figures show.
The research generated calls by online safety advocates for broader approaches to youth mental health, rather than banning or censorship, and reignited a debate about whether seeing posts about self-harm online makes teenagers more likely to hurt themselves.
NZ kids exposed to concerning online material
Teens view suicide methods, violent images, hateful content and ways to be thin.
Almost half of New Zealand teenagers have been exposed to potentially harmful online content – including self-harm and suicide material – according to new Netsafe research.
And a quarter of New Zealand children have been bothered or upset by something that happened online in the last year.
The study, Ngā taiohi matihiko o Aotearoa – New Zealand Kids Online, is being released to mark Safer Internet Day, a global event with more than 50 participating countries. Netsafe is the campaign’s host in New Zealand and has enlisted a record number of supporters to join together for a better internet.
Of the study’s teenage participants (aged 13-17), 36 percent said they had seen violent images while online and 27 percent had viewed hateful content.
Netsafe’s research shows teenagers are accessing self-harm material (20 percent) and some are even digesting “how-to-suicide guides” (17 percent). Fifteen percent searched for information on “ways to be very thin”.
Martin Cocker, Netsafe CEO, says the research demonstrates why whānau need to engage in regular, open, non-judgemental discussions about life online with their young people.
“We live in an imperfect world where risks exist and young people will often be exposed to them on their devices away from the eyes of their parents. Younger children can be monitored and protected by parental software, but older children will choose who they disclose incidents to, and who they will seek help from,” Cocker says.
Participants were asked who they turn to for help in the wake of an upsetting online incident. An overwhelming 69 percent chose a parent, 37 percent a friend and 17 percent a sibling. Eleven percent of children chose to speak with no one.
Cocker says: “Even when they deliberately seek out content, there is still a chance they’ll be upset or even harmed by what they see.” Of the teenagers who reported being exposed to potentially harmful content, 28 percent said they were “fairly” or “very” upset, and that number was higher for girls (38 percent) than for boys (18 percent).
Cocker added: “It’s often a big step for young people to seek help. If a child comes to you it is important to focus on fixing the issue and providing them with the right support to help minimise the harm they may experience.
“We know from previous research that young people fear they will be punished and their caregivers will blame them for the situation they find themselves in. This is something young people believe stems from adults not taking the time to understand the online world they inhabit.” While there might be a digital technology gap between what parents know and what their child knows, adults have life skills, maturity and experience that children haven’t yet developed.
Netsafe’s Online Safety Parent Toolkit can help adults have ongoing, open online safety conversations with their children.
A full copy of the Ngā taiohi matihiko o Aotearoa – New Zealand Kids Online study can be found here: https://www.netsafe.org.nz/childrens-online-risks-safety
Almost half of New Zealand teens exposed to self-harm, suicide and violence online – study
Almost half of New Zealand teenagers have been exposed to potentially harmful content online, including self-harm and suicide material, violence, and “hateful” content.
Social media sites, online games and streaming services used by children will have to abide by a new privacy code set by the UK's data watchdog.
Elizabeth Denham, the information commissioner, said future generations will be “astonished to think that we ever didn't protect kids online”.
She said the new Age Appropriate Design Code will be “transformational”.
The father of Molly Russell, 14, who killed herself after viewing graphic content online, welcomed the standards.
Watchdog cracks down on tech firms that fail to protect children
Technology companies will be required to assess their sites for sexual abuse risks, prevent self-harm and pro-suicide content, and block children from broadcasting their location, after the publication of new rules for “age-appropriate design” in the sector.
The UK Information Commissioner’s Office, which was tasked with creating regulations to protect children online, will enforce the new rules from autumn 2021, after a one-year transition period. Companies that break the law can then face sanctions comparable to those under the GDPR, including fines of up to £17m or 4% of global turnover.
Britain Plans Vast Privacy Protections for Children
Britain unveiled sweeping new online protections for children on Tuesday, issuing expansive rules despite widespread objections from a number of tech companies and trade groups.
The rules will require social networks, gaming apps, connected toys and other online services that are likely to be used by people under 18 to overhaul how they handle those users’ personal information. In particular, they will require platforms like YouTube and Instagram to turn on the highest possible privacy settings by default for minors, and turn off by default data-mining practices like targeted advertising and location tracking for children in the country.
ICO publishes Code of Practice to protect children’s privacy online
Today the Information Commissioner’s Office has published its final Age Appropriate Design Code – a set of 15 standards that online services should meet to protect children’s privacy.
Age appropriate design: a code of practice for online services
Data sits at the heart of the digital services children use every day. From the moment a young person opens an app, plays a game or loads a website, data begins to be gathered. Who’s using the service? How are they using it? How frequently? Where from? On what device?