Personal Technology – East Bay Times

Jill On Money: Year-end money moves — 2022
Mon, 12 Dec 2022
https://www.eastbaytimes.com/2022/12/12/jill-on-money-year-end-money-moves-2022/

Consumers, businesses, and investors are looking forward to putting 2022 in the rear-view mirror, as soaring prices, rising interest rates and dreadful financial markets have wreaked havoc on pocketbooks.

While you may not be able to control any of those big issues, this is the time of year where I encourage you to be proactive, especially in light of the changes that are around the corner in 2023.

Those changes are primarily due to the inflation adjustments within the tax code for tax year 2023. The IRS announced an increase in the standard deduction, new ranges of income to which the existing marginal tax rates apply, increases to the Earned Income Tax Credit, higher limits on contributions to health flexible spending arrangements, and a larger annual exclusion for gifts, to name a few.

Additionally, the annual limit on contributions to employer-based retirement plans will increase to $22,500, SIMPLE IRAs will rise to $15,500, and catch-up contributions for those 50 and older will increase to $7,500 (up from $6,500) for 401(k) plans, 403(b) contracts, 457 plans, and SARSEPs, and to $3,500 (up from $3,000) for SIMPLE plans and SIMPLE IRAs.
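For readers who want to work out their own 2023 ceiling, the limits above reduce to a small lookup. This is an illustrative sketch only: the plan keys are my own labels, and the figures should be verified against IRS guidance.

```python
# 2023 employee-deferral limits as described above (illustrative only).
LIMITS_2023 = {
    "401k": {"base": 22_500, "catch_up": 7_500},    # also 403(b), 457, SARSEP
    "simple": {"base": 15_500, "catch_up": 3_500},  # SIMPLE plans and SIMPLE IRAs
}

def max_contribution(plan: str, age: int) -> int:
    """Maximum 2023 contribution, adding the catch-up amount at age 50+."""
    limit = LIMITS_2023[plan]
    return limit["base"] + (limit["catch_up"] if age >= 50 else 0)
```

So a 55-year-old with a 401(k) could defer up to $30,000 ($22,500 plus the $7,500 catch-up), while a 40-year-old is capped at $22,500.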

Got it? Good…now let’s do some year-end planning!

Think about 2022 taxes NOW

Use the IRS’s withholding estimator to see if you have had enough money set aside to pay your tax bill in April. If not, notify your payroll department to increase your withholding through the end of the year. If you are not working or are self-employed, you may want to make an estimated tax payment to reduce or eliminate potential tax penalties.

Slash your tax bill with Uncle Sam’s help

The best way to reduce your tax liability is to maximize your pre-tax retirement plan contributions before the end of the year. Most employer plans allow you to increase your contributions, but be sure to readjust after the New Year.

Consider a Roth conversion

If you had lower income in 2022 or the value of your traditional IRA is down, it may make sense to convert to a Roth IRA. When you do so, the amount that you convert will add to your taxable income.

Considering that tax rates are historically low, paying the tax due now may be among the smartest decisions you could make over the long term. Once you convert to a Roth, the money will grow tax-free and when you retire and withdraw it, there will be no tax due. Because Roth plans are not subject to Required Minimum Distributions (RMDs), many use them to help control future taxation of Social Security benefits and/or increased costs of Medicare, which are income-tested.
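The arithmetic behind converting in a down year is simple: the tax is owed on the converted amount, so a depressed balance means less tax for the same holdings, and any recovery then happens tax-free inside the Roth. A sketch, under the simplifying assumption of a single flat marginal rate (real brackets are graduated, so an actual conversion can straddle rates):

```python
def conversion_tax(amount: float, marginal_rate: float) -> float:
    """Tax due on a Roth conversion, assuming one flat marginal rate."""
    return amount * marginal_rate

# Converting a hypothetical $50,000 IRA at a 22% marginal rate:
tax_now = conversion_tax(50_000, 0.22)
# If those same holdings had fallen 25% in the 2022 downturn,
# converting them all would cost about a quarter less in tax:
tax_after_drop = conversion_tax(50_000 * 0.75, 0.22)
```

The dollar figures and the 22% rate are hypothetical examples, not advice for any particular bracket.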

Down markets don’t impact RMDs

The IRS does not care that the value of your retirement accounts is down — you still must take your RMD before the end of the calendar year, or else you will pay a whopping penalty.

Embrace your losers

It has been a rough year for investors, but Uncle Sam may help assuage your suffering. If you have a taxable investment account, you can sell losing positions and use those losses against sales of winning positions. If you have more losses than gains, you can deduct up to $3,000 of losses against ordinary income. If you have more than $3,000 of losses, you can carry over that amount to future years.
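The netting rules above can be sketched in a few lines. This is a simplification: it ignores the required ordering of short-term versus long-term netting and assumes the $3,000 annual cap that applies to most filers.

```python
def harvest(gains: float, losses: float, carryover_in: float = 0.0) -> dict:
    """Net capital losses against gains; deduct up to $3,000 of any excess
    against ordinary income; carry the remainder to future years.
    Simplified sketch -- ignores short- vs. long-term netting ordering."""
    net = gains - losses - carryover_in
    if net >= 0:
        return {"taxable_gain": net, "deduction": 0.0, "carryover_out": 0.0}
    excess = -net
    deduction = min(excess, 3_000.0)
    return {"taxable_gain": 0.0, "deduction": deduction,
            "carryover_out": excess - deduction}
```

For example, $5,000 of realized gains against $12,000 of realized losses leaves $7,000 of excess losses: $3,000 deductible against ordinary income this year, $4,000 carried forward.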

When you reinvest the proceeds of these sales, be mindful of the IRS’ “Wash Sale” rule, which won’t let you deduct a loss if you buy a “substantially identical” investment within 30 days. To avoid the rule, wait 31 days, and then repurchase the stock or fund you sold, or replace it with something that is close but not the same (hopefully something cheaper, like an index fund or an exchange-traded fund!).
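The 31-day wait is just date arithmetic, sketched below. Note that the wash-sale window also covers purchases made in the 30 days *before* the sale, which this simple helper does not check.

```python
from datetime import date, timedelta

def earliest_repurchase(sale_date: date) -> date:
    """First day you can rebuy a substantially identical security without
    running afoul of the wash-sale rule: sit out the full 30 days and
    buy on day 31 (purchases in the 30 days before the sale can also
    trigger the rule)."""
    return sale_date + timedelta(days=31)
```

Selling on June 1, for instance, means waiting until July 2 to repurchase the same security.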

Jill Schlesinger, CFP, is a CBS News business analyst. A former options trader and CIO of an investment advisory firm, she welcomes comments and questions at askjill@jillonmoney.com. Check her website at www.jillonmoney.com.

Jill On Money: As housing market cools, will prices follow?
Mon, 07 Nov 2022
https://www.eastbaytimes.com/2022/11/07/jill-on-money-the-housing-market-cools-will-prices-follow/

With inflation stubbornly high, the Federal Reserve is pumping the brakes on the economy by raising interest rates. The reasoning is that when interest rates rise, demand wanes, activity slows, and prices start to moderate — and then eventually fall.

Easy, right?

Unfortunately for the central bank, inflationary cycles are tough to break, and rising rates take a while to filter through the economy. (Separately, the Fed can do little to ease supply constraints, but those do appear to be loosening.)

Behavior in the residential real estate market may be the Fed’s best hope for a soft landing (meaning a slowdown which avoids a full-blown recession) for the economy.

To recap: amid the pandemic, a deluge of buyers, seeking more space and armed with cheap mortgages, rushed into the housing market. With inventory levels low and activity high, prices soared.

That scenario played out in the broader economy, as consumers unleashed their pent-up demand and drove prices higher, first in the goods part of the economy and now in the services side.

While the Fed does not control longer term interest rates associated with most mortgages, all rates have been increasing. A year ago, a 30-year fixed rate mortgage was just over 3% (near the all-time low); today, it has more than doubled to almost 7%, near a 20-year high.

At last year’s 3.2% rate, the monthly payment for a $400,000 house, with 20% down and a 30-year fixed-rate loan, was $1,384 for principal and interest; today, the cost increases to $2,130. Put another way, the buyer who could afford a $450,000 house a year ago must now drop down to $345,000 because of the rate increases.
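Those payments follow from the standard fixed-rate amortization formula, and a short sketch reproduces the column's figures (small differences from the quoted numbers come from rounding the rates):

```python
def monthly_payment(price: float, down_pct: float, annual_rate: float,
                    years: int = 30) -> float:
    """Principal-and-interest payment on a fixed-rate mortgage, using the
    standard amortization formula (taxes and insurance excluded)."""
    principal = price * (1 - down_pct)
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of payments
    return principal * r / (1 - (1 + r) ** -n)

# The column's example: $400,000 house, 20% down, 30-year fixed
low = monthly_payment(400_000, 0.20, 0.032)   # about $1,384 at last year's rate
high = monthly_payment(400_000, 0.20, 0.070)  # about $2,130 at roughly 7%
```

At a 3.2% rate the payment on the $320,000 loan is about $1,384; at 7% it is about $2,129, matching the roughly $750-a-month jump the column describes.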

Higher rates and prices have put the recent real estate acceleration into neutral. According to Redfin, “Housing-market activity is plunging further this fall than it did over the summer as mortgage rates near 7%…Price drops have reached a record high, and home sales and new listings are dropping.” The National Association of Realtors (NAR) reported Existing Home Sales slid in September and are down 23.8% from a year ago.

The situation is impacting both buyers and sellers, with the former forced to remain on the sidelines amid a competitive rental market, and the latter unwilling to list their homes and give up their low mortgage rates, contributing to a decline in new listings (down 17% from a year ago).

While home prices are not dropping precipitously, they are decelerating. In September, the median existing-home price was $384,800, an 8.4% increase from a year ago ($355,100), but down from the record high of $413,800 in June.
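As a quick sanity check of those NAR figures, the year-over-year change works out as the column states:

```python
def pct_change(new: float, old: float) -> float:
    """Percentage change from an old value to a new one."""
    return (new / old - 1) * 100

# NAR's September medians: $384,800 now vs. $355,100 a year earlier
yoy = pct_change(384_800, 355_100)  # about 8.4%
```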

Sam Hall of Capital Economics expects that prices overall will fall by 8% from the June peak over the next year. There is more evidence that the real estate frenzy is abating: fewer homes are selling above their list price; seller price drops are increasing; and the median time a home stays on the market has risen to 33 days, “up more than a full week from 25 days a year earlier and the record low of 17 days set in May and early June,” according to Redfin.

The Fed is likely hoping that the housing market slowdown will echo across various parts of the economy.

If so, the central bank just might get its soft landing. Then again, considering that residential investment is a large part of the nation’s economy, any significant slowdown in the housing market could also increase the risk of a recession in the coming year.

The Fed’s window of opportunity is closing quickly.

Jill Schlesinger, CFP, is a CBS News business analyst. A former options trader and CIO of an investment advisory firm, she welcomes comments and questions at askjill@jillonmoney.com. Check her website at www.jillonmoney.com.

Larry Magid: Staying connected during summer travel
Thu, 07 Jul 2022
https://www.eastbaytimes.com/2022/07/07/larry-magid-staying-connected-during-summer-travel/

If you’re planning a summer trip, you need to do more than just figure out a way to pay for your ever-increasing airline tickets, hope your flights aren’t cancelled, take out a second mortgage to finance your gasoline or, if you have an electric car, figure out where you can charge it.

You also need to think about connectivity.

Your first thoughts about connectivity should be whether you even want it. You probably do want to be able to make and receive calls (hopefully not work-related), and if you’re driving, have access to your navigation app. But if you’re trying to get away from it all, you might want to take a holiday from email and work-related messaging. If so, before you leave, see if your email system can send automated “out of office” replies so people who write you know not to expect an immediate answer. Also, consider disabling any work-related messaging apps you might have, or at least turning off those apps’ ability to send you notifications. Several years ago, my son and daughter booked a family vacation at a Mexican resort that had no cellular or internet access to force me to get off the grid. It was hard at first, but after a couple of days, I started to appreciate being unplugged from the world.

If you need internet, make sure you have a plan that won’t cost you a fortune. If you’re traveling within the United States, it’s probably not an issue, but if you’re leaving the country, be sure to see if there are roaming fees, which can sometimes add up to hundreds of dollars if you’re not careful. Most carriers charge a lot extra for text messages and incoming and outgoing calls when you’re out of the country, but they typically offer roaming packages that can reduce or eliminate those costs. AT&T, for example, has a $10 a day “international day pass” that gives you unlimited talk, text and data in more than 210 countries. $10 a day can add up but not nearly as much as roaming charges. If you don’t need to have people call you on your number, you can purchase SIM cards in countries you visit. It’s a bit of a hassle though well worth it if you’re spending a lot of time in that country.

If you don’t have an affordable roaming plan, make sure you turn off data, messaging and any other services you don’t want to pay for. I’ve heard horror stories from people who forgot to turn off data and were billed for automatic downloads that they didn’t even realize were happening.

Even if you don’t have an international plan, you can usually get internet access and, perhaps, text and phone calls via WiFi when you’re at a hotel or other hotspot. That can also work if you’re in the country but away from a cellular signal. My wife and daughter recently spent a few days at an Airbnb near the Russian River with WiFi but no cellular access. Before she left, I configured her phone to make and accept calls via WiFi (available on most newer phones) so she was able to make and receive calls and texts as soon as she logged onto the house’s WiFi network.

Sometimes you wind up at a place that doesn’t have WiFi, or where it’s unreliable or expensive to use. If there is a cellular signal, you can get your email or access the web from your phone, and if you want to use a laptop, you can probably tether it to the phone via WiFi, Bluetooth or USB so that you’re using your cellular plan for internet access. Before you do, make sure it’s affordable. You might want to upgrade to an unlimited data plan to avoid what could be high data charges. I have an unlimited data plan and often use my phone’s cellular connection when I’m away from home, even if there is an available WiFi network. It’s less of a hassle, and it’s a lot more secure than logging into a public WiFi network. Frequently it’s also faster and more reliable than a crowded public WiFi network.

There are options for when you’re away from any type of land-based signal. If you Google “portable satellite internet” or “portable satellite phone,” you’ll find devices you can purchase or rent, but be sure to check out the usage charges. If you’re going hiking and simply want to be able to seek help in an emergency or keep loved ones informed about your whereabouts, you can purchase or rent a satellite communications device that can locate you and send out an SOS. You can Google “satellite communications device” for links to all sorts of information, including a couple of good explainer articles.

Despite my pleasant off-the-grid experience in Mexico, I do like being connected while I travel: sometimes just to watch movies and TV on my laptop or phone, but often to check out local attractions, make reservations or — especially with COVID — find food for take-out or delivery. It’s also nice to have access to Google Maps and other navigation tools even if I’m not driving. They can be handy for walks or bike rides or, if you’re in a taxi that charges by the mile, to make sure your driver is taking a direct route. I could have used that in Paris when my driver took me out of my way to increase the fare. Because of my very limited French, I didn’t say anything, but I thought about using Google Translate to tell him what I really thought.

Larry Magid is a tech journalist and internet safety activist.

Opinion: Only safety standards will prod Big Tech to protect children
Fri, 01 Jul 2022
https://www.eastbaytimes.com/2022/07/01/opinion-only-safety-standards-will-prod-big-tech-to-protect-children/

Just do it: It’s not that hard to make social media safer for kids.

At a recent event with teachers and doctors from across the country, a pediatric psychiatrist told me that kids have started showing up for kindergarten without the ability to throw a ball or hold a pencil. Their hands lack those abilities, in part, because they spend so much time in front of screens. It seems kids are losing the ability to participate in childhood.

Last year, I disclosed to the federal government more than 20,000 pages of internal documents from my former employer, Facebook (now Meta). Probably the most shocking disclosure was the extent to which Facebook knew its products, Instagram in particular, were harming our children and chose to do nothing about it.

The products children spend so much time with from the youngest of ages are not safe — by design. And it is at the product-design level, rather than tacked-on screen-time features, that products for our children can be made meaningfully safer.

Instagram’s own studies show that the platform worsens body image for 1 in 3 teen girls. More than 13% of teen girls say the app contributes to their suicidal or self-harm thoughts.

In response to these horrific revelations, Facebook sent Instagram CEO Adam Mosseri to defend the platform. “We know that more people die than would otherwise because of car accidents, but by and large, cars create way more value in the world than they destroy … And I think social media is similar,” he said on a podcast with Recode.

I would remind Mosseri that cars have seat belts. They have airbags. There are speed limits, and lower speeds are required in school zones. We must have infant car seats before we can take our babies home from the hospital. Those measures are in place because when we realized just how many lives could be saved with such simple yet effective changes, we acted.

Did the auto industry fight against them? Absolutely. And we are seeing the same fight today from Big Tech — with millions of dollars spent on lobbying and misleading advertising.

It is ironic that this resistance to change and innovation is coming from progenitors of innovation. An industry that has moved fast and broken things should be obligated to fix what’s broken.

There are known technological fixes that would improve safety on the platform, particularly for the most vulnerable, such as children. But company executives are unwilling to implement these solutions because doing so would shave slivers from their billion-dollar profits.

Instead of thinking creatively and designing with safety in mind from the start, Big Tech has relied on censoring our speech and entrapping our children with predatory tactics to ensure they are scrolling for as long as possible. Worse, it has placed the blame on parents when their kids are addicted and depressed as a result.

I am a technologist, but also a pragmatist. I have worked at Facebook, Google, Pinterest and other tech companies, and the truth is that we will never regulate at the pace at which they innovate. Instead of trying to regulate the latest algorithmic innovation or clamp down on free speech, we should require safety standards in the design of the products.

The California Age-Appropriate Design Code Act moving through the Legislature is a step toward creating “seat belts” for our kids on social media. The legislation turns off features such as location tracking and prohibits the sale of kids’ personal data.

Similar legislation is now law in the United Kingdom, so we know Big Tech already is familiar with this.

These measures aren’t about banning social media for our kids. They are about creating social media that promotes the best in humanity and allows our kids to be safe, to connect with one another and learn together. That kind of social media is possible, but we have to design it that way from the start.

California is the birthplace of many of these technologies, and it is appropriate that California take the lead in designing systems that honor our freedom of speech, respect our children’s privacy and enshrine their rights to be safe online.

Frances Haugen is a former Facebook product manager and an advocate for accountability and transparency in social media. She wrote this piece for CalMatters.

Larry Magid: Instagram testing age verification technology
Thu, 23 Jun 2022
https://www.eastbaytimes.com/2022/06/23/larry-magid-instagram-testing-age-verification-technology/

In 2008, I served on the Internet Safety Technical Task Force, run out of Harvard Law School’s Berkman Center at the behest of 49 U.S. state attorneys general and MySpace to, among other things, determine whether there was a practical way for social media companies to determine the age of their users. There wasn’t.

Fast forward to 2022. While the technology is still evolving, there are now ways to accomplish what we couldn’t do more than a decade ago. In the U.S., Meta has started to test new ways to verify age, including Face Based Age Prediction (FBAP) technology, which can anonymously determine a person’s approximate age, along with “social vouching.”

Like most social media services, Instagram requires users to be 13 or older and offers some features and content that are available only to those who are over 18, along with default settings that vary by the user’s age.

Why knowing age is important

In addition to knowing whether a user is 13 or older, and therefore eligible to have an Instagram account, the company needs to know whether a user is under 18, or between 13 and 16, so that it can apply age-appropriate privacy and safety settings and shield younger users from content and features that may be inappropriate for them.

For example, if the service knows a teen is under 16, the account will default to private. Instagram protects users under 18 from unwanted contact by making it harder for potentially suspicious accounts to find them, and it can notify teens when an adult who has been exhibiting potentially suspicious behavior is interacting with them in direct messaging. Instagram age-gates branded content that is not appropriate for teens, including content about alcohol, subscription services, financial insurance products and services, and cosmetic procedures and weight loss. It also prohibits advertisers from targeting ads to people under 18 based on interests and activity on other apps and websites, though it does allow ads to be targeted based on age, gender and location.
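The age-based defaults described above amount to a simple mapping. The sketch below is my own summary for illustration, not Meta's actual implementation; the thresholds come from the article.

```python
def default_protections(age: int) -> dict:
    """Illustrative mapping of Instagram's age-based defaults as described
    in the article (not Meta's actual code)."""
    return {
        "account_allowed": age >= 13,
        "private_by_default": 13 <= age < 16,
        "limit_adult_contact": age < 18,       # harder for suspicious adults to reach
        "age_gated_content_hidden": age < 18,  # alcohol, weight loss, etc.
        "interest_based_ads": age >= 18,       # under 18: only age, gender, location
    }
```

A 14-year-old, for instance, gets a private account by default, while a 17-year-old does not, although both keep the contact and content protections.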

Age matters

The safety and privacy features specifically for teens are only available if the service knows their age, but until now, the primary way to determine a user’s age was to ask them to enter their date of birth at sign-up. Instagram has long investigated reports of people being under age, but there are plenty of children who put in the wrong birthday, claiming to be 13 or older to gain access to the service or to be over 18 to gain access to adult-only features. In addition to investigating such reports, Meta will also use artificial intelligence to detect whether a user is under or over the age of 18; the AI looks at things like birthday greetings and other user-generated content.

Until now, if someone tried to edit their age from under 18 to 18 or older, they were required to upload an ID, such as a driver’s license, passport, school ID, library card or Social Security card, among other options.

Face Based Age Prediction

The new menu of options will still include ID verification, but not everyone has access to an ID. Some also may prefer not to share it with Meta, so the company is adding two new options to determine a person’s approximate age — Face Based Age Prediction (FBAP) and social vouching.

FBAP is a technology developed and operated by Yoti, a UK company that offers age verification technology to companies around the world, including those that offer adult products or services such as alcohol, online gambling or adult-only content.

Social vouching is where one or more individuals (who must be adults themselves) vouch for someone’s age. When someone selects this option, they can verify their age by asking people they are mutually connected with to vouch for them, selecting three vouchers from a list of six provided by Instagram. A Meta spokesperson said, “we ensure the trustworthiness of vouchers by using integrity signals, for example excluding accounts that have been registered very recently, accounts that are suspected to be fake, and limiting to users with an age of 18 or over. The user’s age is considered verified if all three responses match and the answer is in the exact age band that the user is attempting to change their date of birth to.”
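The acceptance rule the spokesperson describes, that all three vouchers must respond with the exact band being claimed, can be sketched as below. This is illustrative only: the age-band labels are hypothetical, and Meta's integrity screening of the vouchers themselves is not modeled.

```python
def vouching_verified(responses: list, target_band: str) -> bool:
    """Sketch of the vouching check described in the article: the age change
    is approved only if exactly three vouchers responded and every response
    names the exact age band the user is claiming. (Band labels and the
    voucher-screening step are not modeled; illustrative only.)"""
    return len(responses) == 3 and all(r == target_band for r in responses)
```

One dissenting or missing response is enough to reject the change, which keeps a single friendly voucher from overriding the others.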

The Yoti face estimation technology is particularly interesting in terms of its technology, effectiveness and simplicity. As Yoti explains on its website, “Users simply look at the camera on a device and have their photo taken. Our algorithm instantly estimates their age based on their face.”  

Yoti age estimation has been certified for use by government agencies in the United Kingdom and Germany for purposes including access to adult content, gambling and alcohol. 

Accuracy and inclusiveness

There are slight accuracy variations by gender and skin tone. Yoti currently reports results only for female and male genders (it says it is working on ways to improve age estimation for transgender individuals). Overall, according to a May 2022 Yoti white paper, the system estimates age to within:

  • 1.36 years for children between 6 and 13
  • 1.56 years for teens 13-17
  • 2.22 years for young adults between 18 and 24

The system is less precise (an average error of 3.47 years) at estimating the age of adults over 26, but Meta and most other companies don’t need to know the precise age of anyone who is clearly an adult.

Yoti further states that gender and skin tone bias is minimized, and that the True Positive Rate (TPR) for 13- to 17-year-olds correctly estimated as under 23 is 99.65%, while the rate for 6- to 11-year-olds correctly estimated as under 13 is 98.91%.
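That "estimated as under 23" phrasing hints at how an estimate with a roughly two-year error can still be used safely: a service can require the estimate to clear the legal threshold by a buffer before treating someone as an adult, so a teen whose age is over-estimated by a couple of years still fails the gate. The sketch below is my own illustration; the five-year buffer mirrors the under-23 figure above and is not something Yoti or Meta has specified.

```python
def treat_as_adult(estimated_age: float, threshold: int = 18,
                   buffer: float = 5.0) -> bool:
    """Buffer-based age gating: treat a user as an adult only when the
    face-based estimate clears the legal threshold by a safety margin.
    (Illustrative; the 5-year buffer is an assumption, not a spec.)"""
    return estimated_age >= threshold + buffer
```

With this rule, a 17-year-old whose age is over-estimated at 20 is still kept out of adult-only features, at the cost of asking some young adults for an ID instead.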

Privacy

Yoti only estimates age — not identity. Meta says that it shares only the user’s selfie with Yoti and that both Yoti and Instagram delete the image once the age estimation is complete. Yoti says that it shares only the estimated age with Meta or any other client company, and that “The photograph is not viewed by any Yoti staff.” The company determines only an estimated age and says that the image is not used to identify the person. The Future of Privacy Forum has published an infographic along with a blog post which outlines a set of principles regarding the use of facial detection.

In an interview, Yoti’s Chief Policy & Regulatory Officer, Julie Dawson, said that the system “can’t recognize anyone and we have it audited annually,” and that their auditor certifies that they “do delete the image each time.”

Not a magic bullet

Even with age verification, it’s still important for parents to help their teens make decisions about what is appropriate for them, which depends on many factors, including maturity, well-being and the individual family’s values. Having conversations with your children and teens (not lectures or inquisitions) can go a long way toward helping parents understand what services their kids are using and how they are protecting themselves, and can help everyone in the family learn the skills to thrive in today’s connected world.

Disclosure: Larry Magid is CEO of ConnectSafely, a non-profit internet safety organization that receives financial support from Meta (Instagram’s parent company) and other technology companies.

Larry Magid: How to make Google search safer or more precise
Thu, 16 Jun 2022
https://www.eastbaytimes.com/2022/06/16/larry-magid-how-to-make-google-search-safer-or-more-precise/

Just about everyone uses Google to search the internet, but what some people may not know is that Google offers lots of search options. These include “SafeSearch,” which filters out links to explicit content, as well as the ability to use “operators” to fine-tune your search, for example by finding results from specific sites or from specific types of sites, like those operated by government agencies or by schools, colleges and universities.

Why use SafeSearch

By default, Google accesses nearly anything on the web, including sites with sexual content, graphic violence and other explicit material that is not suited for children and may be offensive to some adults or inappropriate in certain situations, like when you are at work or around other people.

As Google states on its SafeSearch page, “SafeSearch isn’t 100% accurate.” Based on my testing, it’s very good, but no filter is perfect, which is why — even with SafeSearch turned on — it’s important for parents to monitor their children’s use of connected technology and be available to talk with and support them if they come across anything they find disturbing. It’s also important for employers to have clear rules about the type of content employees can access on company devices, or even on their own devices when connected to company networks at the workplace or remotely.

Also, Google SafeSearch applies only to Google searches on accounts where it’s turned on. It does not apply to other search engines or to websites that might link to inappropriate content, and it does not block content if a person goes to a site directly. And unless you’re using device- or network-level filters, or are on a network with SafeSearch locked in, it will not apply if you log out of the Google account where it’s configured. By itself, SafeSearch won’t prevent someone from accessing explicit content if they know where to find it, but it will help prevent accidentally stumbling on that type of content. There are other tools that run on devices or networks that can be used to block inappropriate content. You may even find filters on some public Wi-Fi networks.

Setting it up

If you have children under 13, you can manage their Google accounts with the Family Link app, which lets parents lock in SafeSearch for their children’s accounts; for those kids, SafeSearch is on by default, and only the parent can turn it off. SafeSearch is also turned on by default for teens under 18, but unless there are other controls in place, a teen can turn it off at any time. For school accounts, SafeSearch in Google Workspace for Education is controlled by school administrators, who can turn it on so that it applies even when the account is being used away from school. There are also ways to lock in SafeSearch on workplace or home networks, which you can learn about at tinyurl.com/locksearch.
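Who ultimately controls the SafeSearch setting depends on the kind of account, and the rules above reduce to a short decision list. This is my own summary for illustration, not Google's implementation, and the account fields are hypothetical names.

```python
def safesearch_control(account: dict) -> str:
    """Who controls SafeSearch for an account, per the rules described
    above (illustrative sketch; field names are hypothetical)."""
    if account.get("family_link_child"):   # under 13, parent-managed
        return "parent_locked"             # only the parent can turn it off
    if account.get("school_managed"):      # Google Workspace for Education
        return "admin_controlled"          # follows the account off campus too
    if account.get("age", 99) < 18:        # teen account
        return "on_by_default"             # on by default, teen may turn it off
    return "user_choice"                   # adult: toggle freely in Settings
```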

If you want to turn SafeSearch on (or off) on the web, start by logging into your Google account, going to Settings (lower right corner) and selecting Search settings. On mobile, go to the Google app, tap your profile picture, tap Settings and then “Hide explicit results.” As long as you’re not on a managed network and don’t have a child or student account, you can change your settings at any time.

Refining your search

When you do a Google search, you get results from all types of domains including .com, .org, .gov and .edu. Often, that works out well since Google does a pretty good job of showing you what it considers relevant results, but sometimes you might want to refine your search.


There’s an easy way to refine your search to news, videos, images, books or other options (maps, shopping, flights and finance) by selecting from the menu that appears above your search results. There are other ways to limit your search, including the use of search operators: little codes that you type in along with the search to fine-tune your results.

For example, let’s say you wanted to find something out about Oprah Winfrey from the Time.com website. One way to do that is to use the site operator by typing oprah winfrey site:time.com. Notice that there is no space between site: and the site name.

Perhaps you’re trying to find out information about a drug, but only want government sites. You could type in synthroid site:.gov, and the only information you’d see is from government sites. You could do the same with .edu for education sites, .org for nonprofit organizations (though not all .org sites are nonprofits) or any other top-level domain. For example, if you wanted to find out what Canadians were saying about Oprah, you could type in oprah winfrey site:.ca.

As Google shows on its Refine web searches page, there are several other operators, such as putting a - in front of a word to leave it out of a search. For example, musk -elon will find references to musk but not the Tesla and SpaceX CEO. If you’re looking for an exact match, put it in quotes, such as “strawberry soda.” If you’re shopping for something within a price range, you can separate the two prices with two periods, such as camera $50..$100.

There are other search operators. The easiest way to find them is to search “Google search operators,” or, if you want to limit your search to information from Google, type “google search operators” site:google.com.
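Because these operators are just extra text in the query itself, they can also be assembled programmatically. A minimal Python sketch — the function name and parameters are my own illustration, not any Google API:

```python
from urllib.parse import quote_plus

def build_query(terms, site=None, exclude=None, exact=None):
    """Assemble a Google search URL using the operators described above."""
    parts = [terms]
    if exact:
        parts.append(f'"{exact}"')       # quotes force an exact match
    if exclude:
        parts.append(f"-{exclude}")      # minus sign leaves a word out
    if site:
        parts.append(f"site:{site}")     # no space between site: and the name
    query = " ".join(parts)
    return "https://www.google.com/search?q=" + quote_plus(query)

print(build_query("oprah winfrey", site="time.com"))
# https://www.google.com/search?q=oprah+winfrey+site%3Atime.com
```

The `quote_plus` call simply URL-encodes the finished query, so the operators travel intact in the `q=` parameter.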

Disclosure: Larry Magid is CEO of ConnectSafely.org, a non-profit internet safety organization that receives financial support from Google and other companies.

]]>
https://www.eastbaytimes.com/2022/06/16/larry-magid-how-to-make-google-search-safer-or-more-precise/feed/ 0 8499857 2022-06-16T08:00:34+00:00 2022-06-16T10:53:45+00:00
Larry Magid: Tech connection to mass shootings https://www.eastbaytimes.com/2022/06/02/tech-connection-to-mass-shootings/ https://www.eastbaytimes.com/2022/06/02/tech-connection-to-mass-shootings/#respond Thu, 02 Jun 2022 15:00:03 +0000 https://www.eastbaytimes.com?p=8476292&preview_id=8476292 I was thinking about writing a column about the “tech angle” of the Buffalo shooting but wound up changing subjects at the last minute. Unfortunately, the topic is back in the news because of the horrific slaughter of innocent children in Texas.

Larry Magid

To be clear, technology isn’t to blame for these shootings. It was bullets — not bytes — that took the lives of 10 people in Buffalo and 21 people, including 19 elementary school students, in Uvalde, Texas. Still, some are blaming video games and social media, along with mental illness. And there are some who claim that social media is causing a breakdown in mental health that’s leading to mass shootings.

It is appropriate to revisit the data on the relationship between violent video games and aggressive behavior, but based on what we know at this point, such games are not a major factor. I’m all for increasing funding for mental health programs, but, according to the Institute for Health Metrics and Evaluation’s 2019 Global Burden of Disease report, the rate of mental illness in the U.S. is very close to that of countries where the mass shooting rate is much lower. Australia and New Zealand, which have seen a significant reduction in gun violence, both had a higher reported rate of mental illness than the United States.

Violent video games

Some are blaming violent video games for these shootings, but as a policy document from a division of the American Psychological Association put it, “Scant evidence has emerged that makes any causal or correlational connection between playing violent video games and actually committing violent activities.” That’s not to say that such games are necessarily appropriate for all children and teens, nor does it belie data suggesting that such games can cause aggressive behavior shortly after playing. But as the late Justice Antonin Scalia wrote in his majority opinion in the Supreme Court decision throwing out a California law regulating video games, “They show at best some correlation between exposure to violent entertainment and minuscule real-world effects, such as children feeling more aggressive or making louder noises in the few minutes after playing a violent game than after playing a nonviolent game.”

Social media

When it comes to violence, there are four aspects of social media worth considering:

  • One is its impact on mental health in general.
  • Second is the way it can radicalize and divide people in ways that make it more likely they will attempt to harm certain groups or individuals.
  • Third is the way social media is used to telegraph people’s intentions, perhaps serving as a warning that they might do something to harm themselves or others.
  • Fourth is the way social media is used to broadcast or celebrate violence or as a platform to share the grievances of the perpetrators.

I think it’s fair to say that social media can have an impact on mental health, but in addition to the negative impacts, there are positive ones as well, depending on how it’s used. Obsessive use of almost anything, including social media, can affect your self-esteem, especially if you’re comparing yourself with others. There is evidence of a recent increase in teen mental health issues, but this can be attributed to numerous factors, including the pandemic and social isolation. But I have seen no compelling evidence of a general increase in aggressive or violent behavior as a result of social media, except in cases where people have been radicalized or desensitized to violence as a result of participation in online forums that spread hatred or misinformation.

As per my second criterion, as the Associated Press put it, “The 18-year-old gunman accused of a deadly racist rampage at a Buffalo supermarket seems to fit an all-too-familiar profile: an aggrieved white man steeped in hate-filled conspiracies online, and inspired by other extremist massacres.” Online radicalization is a serious problem, but it’s fair to point out that there are far more young people using social media to spread messages of love, harmony and decency.

Per my third point, there are many cases where killers have signaled their intentions on social media. In some cases, these have been reported to authorities, and the crimes were prevented. In others, they slipped by either unnoticed or unreported. That’s why it’s important to say something if you see anything to suggest potential violent behavior.

As per number four, in addition to being radicalized online, the young Buffalo killer took to Twitch to live stream his attack and went online to post a 180-page “manifesto” sharing the racist and anti-immigrant views behind his murder of patrons at a supermarket in a predominantly Black area of Buffalo.

To its credit, Twitch removed the killer’s content within minutes as is generally the goal of social networks. Still, social networks including major ones like Facebook, Twitter and YouTube, continue to be used to spread hate speech and misinformation that can lead to violence, despite the companies’ policies to ban such content and remove it when it’s discovered.

Right to moderate hanging by a thread

If the Texas legislature has its way, social media companies could be banned from removing hate speech and misinformation as well as other content they deem offensive or inappropriate. A recently passed law, HB 20, was temporarily blocked by the U.S. Supreme Court, but the ruling only put the law on hold while it’s litigated in lower courts. The law, according to an analysis by the Texas Law Research Organization, “could create an incentive for companies to not remove content that may be objectionable but not unlawful, such as bullying, misinformation, or even hate speech.” At issue is whether private companies can be held to the same standards as government agencies when it comes to an almost anything-goes approach to free speech. Yes, you are allowed to spread lies or spew racist, sexist and homophobic rhetoric on a public street, but I have the right to kick you out of my house if you do it in my living room, and so far, social media companies have that same right.

Going forward

Assuming that so-called anti-censorship laws like the one passed in Texas are ultimately struck down or repealed, social media companies will continue to have the right to enforce their rules. But I would strengthen that by saying they have the responsibility to do so when it comes to hate speech and incitement to violence. I know there are gray areas and slippery slopes whenever you restrict speech, but if I were running a company, I’d do everything I could to avoid aiding and abetting bigotry and radicalization.

But words, no matter how vile, don’t kill people, though they can inspire violence. The bullets that pulverized the little bodies of those Robb Elementary School fourth-graders came from an AR-15 style weapon that should never have been sold to an 18-year-old or, for that matter, anyone outside of the military or law enforcement. Just as the First Amendment isn’t absolute when it comes to child sex abuse images or yelling fire in a crowded theater, neither does the Second Amendment give the right to own any weapon that can be manufactured. If it did, it would be legal to own fully automatic weapons or, for that matter, nuclear bombs, or for convicted felons to carry firearms.

Larry Magid is CEO of ConnectSafely, a non-profit internet safety organization that receives financial support from social media companies. 


]]>
https://www.eastbaytimes.com/2022/06/02/tech-connection-to-mass-shootings/feed/ 0 8476292 2022-06-02T08:00:03+00:00 2022-06-03T10:59:07+00:00
A face search engine anyone can use is alarmingly accurate https://www.eastbaytimes.com/2022/05/28/a-face-search-engine-anyone-can-use-is-alarmingly-accurate-2/ https://www.eastbaytimes.com/2022/05/28/a-face-search-engine-anyone-can-use-is-alarmingly-accurate-2/#respond Sat, 28 May 2022 16:04:20 +0000 https://www.eastbaytimes.com?p=8469987&preview_id=8469987 For $29.99 a month, a website called PimEyes offers a potentially dangerous superpower from the world of science fiction: the ability to search for a face, finding obscure photos that would otherwise have been as safe as the proverbial needle in the vast digital haystack of the internet.

A search takes mere seconds. You upload a photo of a face, check a box agreeing to the terms of service and then get a grid of photos of faces deemed similar, with links to where they appear on the internet. The New York Times used PimEyes on the faces of a dozen Times journalists, with their consent, to test its powers.

PimEyes found photos of every person, some that the journalists had never seen before, even when they were wearing sunglasses or a mask, or their face was turned away from the camera, in the image used to conduct the search.

PimEyes found one reporter dancing at an art museum event a decade ago, and crying after being proposed to, a photo that she didn’t particularly like but that the photographer had decided to use to advertise his business on Yelp. A tech reporter’s younger self was spotted in an awkward crush of fans at the Coachella music festival in 2011. A foreign correspondent appeared in countless wedding photos, evidently the life of every party, and in the blurry background of a photo taken of someone else at a Greek airport in 2019. A journalist’s past life in a rock band was unearthed, as was another’s preferred summer camp getaway.

Unlike Clearview AI, a similar facial recognition tool available only to law enforcement, PimEyes does not include results from social media sites. The sometimes surprising images that PimEyes surfaced came instead from news articles, wedding photography pages, review sites, blogs and pornography sites. Most of the matches for the dozen journalists’ faces were correct. For the women, the incorrect photos often came from pornography sites, which was unsettling in the suggestion that it could be them. (To be clear, it was not them.)

A tech executive who asked not to be identified said he used PimEyes fairly regularly, primarily to identify people who harass him on Twitter and use their real photos on their accounts but not their real names. Another PimEyes user who asked to stay anonymous said he used the tool to find the real identities of actresses from pornographic films and to search for explicit photos of his Facebook friends.

The new owner of PimEyes is Giorgi Gobronidze, a 34-year-old academic who says his interest in advanced technology was sparked by Russian cyberattacks on his home country, Georgia.

Gobronidze said he believed that PimEyes could be a tool for good, helping people keep tabs on their online reputation. The journalist who disliked the photo that a photographer was using, for example, could now ask him to take it off his Yelp page.

PimEyes users are supposed to search only for their own faces or for the faces of people who have consented, Gobronidze said. But he said he was relying on people to act “ethically,” offering little protection against the technology’s erosion of the long-held ability to stay anonymous in a crowd. PimEyes has no controls in place to prevent users from searching for a face that is not their own and suggests a user pay a hefty fee to keep damaging photos from an ill-considered night from following him or her forever.

“It’s stalkerware by design no matter what they say,” said Ella Jakubowska, a policy adviser at European Digital Rights, a privacy advocacy group.

Under new management

Gobronidze grew up in the shadow of military conflict. His kindergarten was bombed during the civil war that ensued after Georgia declared independence from the Soviet Union in 1991. The country was effectively cut off from the world in 2008 when Russia invaded and the internet went down. The experiences inspired him to study the role of technological dominance in national security.

After stints working as a lawyer and serving in the Georgian army, Gobronidze got a master’s degree in international relations. He began his career as a professor in 2014, eventually landing at European University in Tbilisi, Georgia, where he still teaches.

In 2017, Gobronidze was in an exchange program, lecturing at a university in Poland, when one of his students introduced him, he said, to two “hacker” types — Lucasz Kowalczyk and Denis Tatina — who were working on a facial search engine. They were “brilliant masterminds,” he said, but “absolute introverts” who were not interested in public attention.

They agreed to speak with him about their creation, which eventually became PimEyes, for his academic research, Gobronidze said. He said they had explained how their search engine used neural net technology to map the features of a face, in order to match it to faces with similar measurements, and that the program was able to learn over time how to best determine a match.
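The description Gobronidze gives matches the standard embedding approach: a neural network reduces each face to a vector of measurements, and two faces “match” when their vectors point in nearly the same direction. A toy illustration of just the matching step — the numbers and filenames here are invented, and PimEyes’ actual model, dimensions and thresholds are not public:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings"; real models use hundreds of dimensions.
query_face = [0.9, 0.1, 0.3, 0.5]
index = {
    "photo_a.jpg": [0.88, 0.12, 0.31, 0.49],  # nearly the same measurements
    "photo_b.jpg": [-0.2, 0.9, 0.1, 0.0],     # a very different face
}

matches = {url: cosine_similarity(query_face, vec) for url, vec in index.items()}
best = max(matches, key=matches.get)
print(best)  # photo_a.jpg scores near 1.0; photo_b.jpg scores near 0
```

A real system stores millions of such vectors keyed by URL and returns every image whose similarity to the uploaded face exceeds some cutoff.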

“I felt like a person from the Stone Age when I first met them,” Gobronidze said. “Like I was listening to science fiction.”

He kept in touch with the founders, he said, and watched as PimEyes began getting more and more attention in the media, mostly of the scathing variety. In 2020, PimEyes claimed to have a new owner, who wished to stay anonymous, and the corporate headquarters were moved from Poland to Seychelles, a popular African offshore tax haven.

Gobronidze said he “heard” sometime last year that this new owner of the site wanted to sell it. So he quickly set about gathering funds to make an offer, selling a seaside villa he had inherited from his grandparents and borrowing a large sum from his younger brother, Shalva Gobronidze, a software engineer at a bank. The professor would not reveal how much he had paid.

“It wasn’t as big an amount as someone might expect,” Gobronidze said.

In December, Gobronidze created a corporation, EMEARobotics, to acquire PimEyes and registered it in Dubai because of the United Arab Emirates’ low tax rate. He said he had retained most of the site’s small tech and support team, and hired a consulting firm in Belize to handle inquiries and regulatory questions.

Gobronidze has rented office space for PimEyes in a tower in downtown Tbilisi. It is still being renovated, light fixtures hanging loose from the ceiling.

Tatia Dolidze, a colleague of Gobronidze’s at European University, described him as “curious” and “stubborn,” and said she had been surprised when he told her that he was buying a face search engine.

“It was difficult to imagine Giorgi as a businessman,” Dolidze said by email.

Now he is a businessman who owns a company steeped in controversy, primarily around whether we have any special right of control over images of us that we never expected to be found this way. Gobronidze said facial recognition technology would be used to control people if governments and big companies had the only access to it.

And he is imagining a world where facial recognition is accessible to anyone.

‘Essentially extortion’

A few months back, Cher Scarlett, a computer engineer, tried out PimEyes for the first time and was confronted with a chapter of her life that she had tried hard to forget.

In 2005, when Scarlett was 19 and broke, she considered working in pornography. She traveled to New York City for an audition that was so humiliating and abusive that she abandoned the idea.

PimEyes unearthed the decades-old trauma, with links to where exactly the explicit photos could be found on the web. They were sprinkled in among more recent portraits of Scarlett, who works on labor rights and has been the subject of media coverage for a high-profile worker revolt she led at Apple.

“I had no idea up until that point that those images were on the internet,” she said.

Worried about how people would react to the images, Scarlett immediately began looking into how to get them removed, an experience she described in a Medium post and to CNN. When she clicked on one of the explicit photos on PimEyes, a menu popped up offering a link to the image, a link to the website where it appeared and an option to “exclude from public results” on PimEyes.

But exclusion, Scarlett quickly discovered, was available only to subscribers who paid for “PROtect plans,” which cost from $89.99 to $299.99 per month. “It’s essentially extortion,” said Scarlett, who eventually signed up for the most expensive plan.

Gobronidze disagreed with that characterization. He pointed to a free tool for deleting results from the PimEyes index that is not prominently advertised on the site. He also provided a receipt showing that PimEyes had refunded Scarlett for the $299.99 plan last month.

PimEyes has tens of thousands of subscribers, Gobronidze said, with most visitors to the site coming from the United States and Europe. It makes the bulk of its money from subscribers to its PROtect service, which includes help from PimEyes support staff in getting photos taken down from external sites.

PimEyes has a free “opt-out” as well, for people to have data about themselves removed from the site, including the search images of their faces. To opt out, Scarlett provided a photo of her teenage self and a scan of her government-issued identification. At the beginning of April, she received a confirmation that her opt-out request had been accepted.

“Your potential results containing your face are removed from our system,” the email from PimEyes said.

But when the Times ran a PimEyes search of Scarlett’s face with her permission a month later, there were more than 100 results, including the explicit ones.

Gobronidze said that this was a “sad story” and that opting out didn’t block a person’s face from being searched. Instead, it blocks from PimEyes’ search results any photos of faces “with a high similarity level” at the time of the opt-out, meaning people need to regularly opt out, with multiple photos of themselves, if they hope to stay out of a PimEyes search. Gobronidze said explicit photos were particularly tricky, comparing their tendency to proliferate online to the mythical beast Hydra.

“Cut one head and two others appear,” he said.

Gobronidze said he wanted “ethical usage” of PimEyes, meaning that people search only for their own faces and not those of strangers.

But PimEyes does little to enforce this goal, beyond a box that a searcher must click asserting that the face being uploaded is his or her own. Helen Nissenbaum, a Cornell University professor who studies privacy, called this “absurd,” unless the site had a searcher provide government identification, as Scarlett had to when she opted out.

“If it’s a useful thing to do, to see where our own faces are, we have to imagine that a company offering only that service is going to be transparent and audited,” Nissenbaum said.

PimEyes does no such audits, though Gobronidze said the site would bar a user with search activity “beyond anything logical,” describing one with more than 1,000 searches in a day as an example. He is relying on users to do what’s right and mentioned that anyone who searched someone else’s face without permission would be breaking European privacy law.

“It should be the responsibility of the person using it,” he said. “We’re just a tool provider.”

Scarlett said she had never thought she would talk publicly about what happened to her when she was 19, but felt she had to after she realized that the images were out there.

“It would have been used against me,” she said. “I’m glad I’m the person who found them, but to me, that’s more about luck than PimEyes working as intended. It shouldn’t exist at all.”

Exceptions to the rule

Despite saying PimEyes should be used only for self-searches, Gobronidze is open to other uses as long as they are “ethical.” He said he approved of investigative journalists and the role PimEyes played in identifying Americans who stormed the U.S. Capitol on Jan. 6, 2021.

The Times allows its journalists to use face recognition search engines for reporting but has internal rules about the practice. “Each request to use a facial recognition tool for reporting purposes requires prior review and approval by a senior member of the masthead and our legal department to ensure the usage adheres to our standards and applicable law,” said a Times spokeswoman, Danielle Rhoades Ha.

There are users Gobronidze doesn’t want. He recently blocked people in Russia from the site, in solidarity with Ukraine. He mentioned that PimEyes was willing, like Clearview AI, to offer its service for free to Ukrainian organizations or the Red Cross, if it could help in the search for missing persons.

The better-known Clearview AI has faced serious headwinds in Europe and around the world. Privacy regulators in Canada, Australia and parts of Europe have declared Clearview’s database of 20 billion face images illegal and ordered Clearview to delete their citizens’ photos. Italy and Britain issued multimillion-dollar fines.

A German data protection agency announced an investigation into PimEyes last year for possible violations of Europe’s privacy law, the General Data Protection Regulation, which includes strict rules around the use of biometric data. That investigation is continuing.

Gobronidze said he had not heard from any German authorities. “I am eager to answer all of the questions they might have,” he said.

He is not concerned about privacy regulators, he said, because PimEyes operates differently. He described it as almost being like a digital card catalog, saying the company does not store photos or individual face templates but rather URLs for individual images associated with the facial features they contain. It’s all public, he said, and PimEyes instructs users to search only for their own faces. Whether that architectural difference matters to regulators is yet to be determined.


This article originally appeared in The New York Times.

]]>
https://www.eastbaytimes.com/2022/05/28/a-face-search-engine-anyone-can-use-is-alarmingly-accurate-2/feed/ 0 8469987 2022-05-28T09:04:20+00:00 2022-05-28T10:55:40+00:00
Larry Magid: How to delete what Google knows about you https://www.eastbaytimes.com/2022/05/26/larry-magid-how-to-delete-what-google-knows-about-you/ https://www.eastbaytimes.com/2022/05/26/larry-magid-how-to-delete-what-google-knows-about-you/#respond Thu, 26 May 2022 15:00:57 +0000 https://www.eastbaytimes.com?p=8466716&preview_id=8466716 Like complaining about the weather, a lot of people complain about how Google impacts their privacy but, unlike the weather, there are things you can do about it.

Larry Magid

I’m not suggesting that Google doesn’t collect too much information from people, nor am I discouraging policymakers from looking into requiring the company to better protect users. But, as a baseline, everyone should know what information Google is collecting about them and what they can do to limit or delete that information. Also, the information I’m writing about might not be all that Google collects; it doesn’t cover, for example, data the company gathers to display advertisements.

Search history

By default, Google keeps track of your search history when you’re logged in. But it also has a page (myactivity.google.com/myactivity) that will report and let you delete your Google activity. When you land on that page, you can see your search history, including Google web search and Google News. You can also see other activity in Google products, including Google Calendar, Gmail and any apps you’ve run on an Android device. If you click on “Other activity,” you’re taken to a menu where you can also view YouTube activity and settings, location history, Google ad settings and much more, depending on which Google services you use.

For every activity, there is the ability to delete it from Google’s history. You can delete just the last hour’s activity, the last day, all time or set a custom range. When you delete activity, according to Google, “First, we aim to immediately remove it from view and the data may no longer be used to personalize your Google experience. We then begin a process designed to safely and completely delete the data from our storage systems.” Because the process might not be instantaneous, there is a possibility that someone could unearth that data with a legal process such as a warrant, until it’s completely gone from Google and its backup systems.

You can also turn off Google’s ability to save your searches and other future activities. If you want even more privacy when using Google, you can use your browser’s private or incognito mode and make sure you’re not logged into your Google account. That will keep both your browser and Google from keeping a record of what you’re doing, but it may not prevent websites you visit from tracking you (especially if you’re logged in) or protect you from hackers or information collected by your internet service provider or your employer if you’re using an employer-owned device or logged into your employer’s virtual private network.

Location history can be revealing

Even if you’re not concerned about security or privacy, location history can be very interesting because it lets you see a timeline of everywhere you’ve been if you have Google Maps on your phone. It goes back for years and shows you locations you’ve visited on a map, even if you didn’t use Google Maps to navigate your way there. In April 2017, my wife and I took a driving tour around Ireland, and I can see a daily breakdown of where we visited, a map of our entire drive and even photographs we took with our phone each day of the trip. I had long forgotten a meal we had at O’Sullivan’s Courthouse Pub in Dingle on April 19, but it was one of the many details of that trip that showed up on the timeline. You can delete this information or prevent it from being collected in the first place but, personally, I’m glad it’s there because it’s not only fun to review my past travels, it can be useful if I’m trying to recall details of a business trip for income tax or reimbursement purposes. Recently, my wife asked me if I could recall a hotel we stayed at years ago, and I was able to find it by locating the city on a map and looking at my activity for that day.

A map of one day’s travel around Ireland

Browser history

Like most browsers, Google Chrome keeps a history of sites you’ve visited. You can view that activity by clicking History from the three-dot menu in the upper right, then click on specific sites to delete individual records or choose Clear Browsing Data from the left menu. You might want to uncheck Cookies and Other Site Data, because deleting them will log you out of most sites. If you do decide to delete those cookies (which may be a good idea for privacy reasons), be sure you know your user names and passwords because you’ll have to re-enter them.

Android phone data

If you have an Android phone, Google has even more information about you, including what apps you’re using and even more location data. Android also has a record of your incoming, outgoing and missed calls though — contrary to some internet rumors — it does not record your actual phone calls, although Google Voice users have the option of pressing 4 to record incoming calls, which will announce to the other caller that the call is being recorded.

You can delete your call history or a record of any individual call. To delete your entire history, go to the dialer, select Recents, click on the three-dot menu to the right, select Call history, click on the three-dot menu again and click Clear call history. To delete the history of all calls to or from a specific number, go to Recents, click on a call from that number and select History.

iPhone users can learn how to delete calls from their device by Googling “delete iphone call history.”

Removing information about you from Google search

If you do a Google search on your name, your home address or your phone number, there’s a chance you’ll find pages that reveal personal information you might not want to be public. In some cases it can be removed from Google search, but not in all cases. Data that can be removed includes personal info such as ID numbers and private documents, nude or sexually explicit items, content about you on sites with exploitative removal practices, content that should be removed for legal reasons and imagery of anyone currently under the age of 18. This will not remove all information about you — just certain categories — and it will not remove it from the web — just from Google search results. Also, it’s not automatic: you request the removal, and Google decides whether to remove it based on its criteria.

Google isn’t the only company tracking you

These instructions cover only information Google collects from you, but there is plenty of other information about you to think about. Any of your apps, browsers or the websites you visit could be collecting information, especially if you are logged in. Some cars (including Teslas) can track where you’ve driven and, of course, your phone carrier has a log of your calls, which could be unearthed by a warrant.

Disclosure: Larry Magid is CEO of ConnectSafely.org, a non-profit internet safety organization that receives financial support from Google and other technology companies.

]]>
https://www.eastbaytimes.com/2022/05/26/larry-magid-how-to-delete-what-google-knows-about-you/feed/ 0 8466716 2022-05-26T08:00:57+00:00 2022-05-27T06:06:54+00:00
Larry Magid: Pair of California bills to protect kids online show promise, but there are concerns https://www.eastbaytimes.com/2022/05/19/larry-magid-pair-of-california-bills-to-protect-kids-online-show-promise-but-there-are-concerns/ https://www.eastbaytimes.com/2022/05/19/larry-magid-pair-of-california-bills-to-protect-kids-online-show-promise-but-there-are-concerns/#respond Thu, 19 May 2022 15:00:48 +0000 https://www.eastbaytimes.com?p=8455304&preview_id=8455304 California lawmakers are considering a pair of bills to regulate kids’ use of online services. One has a lot of promise, but the other may have some unfortunate unintended consequences.

Larry Magid

Although I have a few concerns over its details, I’m generally impressed by the California Age Appropriate Design Code Act (AB 2273), but I have my doubts about the Social Media Platform Duty to Children Act (AB 2408). Both bills are co-sponsored by Assembly members Jordan Cunningham (R-San Luis Obispo) and Buffy Wicks (D-Oakland). You can listen to my interview with Assemblywoman Wicks at connectsafely.org/wicks.

The California Age Appropriate Design Code Act is designed to protect minors under the age of 18, unlike the current federal law, the Children’s Online Privacy Protection Act (COPPA), which applies only to children younger than 13. And unlike COPPA, whose restrictions on collecting personally identifiable information have effectively required social media companies to ban children under 13, this bill does not completely eliminate a company’s right to collect such information. Instead, the California bill smartly requires companies to default teens to “a high level of privacy protection,” something that some companies already do. It also requires companies to post their privacy information and terms of service prominently, in language “suited to the age of children” who are likely to use the service. If it becomes law, the bill would also prohibit companies from using any information from people under 18 for any purpose other than to deliver their service. That makes a lot more sense than COPPA’s complete ban on collecting any personal information from children under 13.

One thing I love about this bill is that it requires the service to provide “an obvious signal” to the child when they are being tracked or monitored if the service has a feature that allows the “child’s parent, guardian, or any other consumer to monitor the child’s online activity or track their location.” I have long argued that parents should not use any parental control or monitoring tool in stealth mode.

The California bill is modeled after the UK’s Age Appropriate Design Code, and because most social media companies that operate in California and other U.S. states also operate in the UK, many have already adopted parts of the UK code for their U.S. users.

Details matter

As always, it’s important to read the fine print and consider how this bill would be implemented. I have some concerns about the proposed task force, which would be given regulatory powers and would be appointed by the California Privacy Protection Agency, itself a new agency. Who would be on this task force, and to whom would it ultimately be accountable? This brand-new agency isn’t fully staffed, nor has it promulgated any rules. It’s important that the task force include child rights experts as well as child safety and development experts. It’s not uncommon for people who focus on child protection to take actions that unintentionally limit child rights. Many young people turn to social media to explore and express concerns around politics, religion, sexuality, health and many other topics that are important to them.

I’m also concerned that this bill is aimed at services “likely to be accessed by a child.” I get that the authors are trying to reach companies whose content attracts children even when those companies claim they don’t market to kids, but “likely to be accessed” can cover a great deal of content. The Super Bowl, for example, is watched by millions of children, but that hasn’t stopped TV networks from airing commercials for adult beverages. I know a kindergarten teacher who was unable to play children’s music from her YouTube Music Premium account to her class because of YouTube’s overcautious reaction to the Children’s Online Privacy Protection Act, which was designed to keep children’s personal information away from marketers, not to prevent teachers from playing music to their students. The content may have been aimed at children, but the person playing it was a responsible adult.

The bill also states that “age verification methods must be proportionate to the risks that arise from the data management practices of the business, privacy protective, and minimally invasive.” I agree, but it’s also important to understand that age verification is difficult in the U.S. where many children don’t have government issued ID, and privacy laws prohibit access to school and Social Security records. In 2008, I served on the Internet Safety Technical Task Force, which, after hearing from multiple experts on age verification, concluded that it wasn’t practical within the context of U.S. laws and technology. Admittedly, artificial intelligence has progressed since then, making it worth a second look, but determining whether someone is a child is trickier than it might seem.

Finally, as with all state laws affecting internet use, there is the issue of state vs. federal regulations. Because of its population size and tech presence, California regulations will likely set a floor for how the companies behave in every state. But when states pass rules that might contradict each other, it creates a confusing playing field for industry, regulators and users.

Addiction bill

I have stronger concerns about the Social Media Platform Duty to Children Act. Perhaps I’m quibbling over semantics, but I’m not sure I even agree with the bill’s premise that social media is “addictive.” Although some psychologists and psychiatrists believe that it is, the official bodies that represent those professions don’t classify it as such, although they do recognize that obsessive use of technology can be problematic and harmful.

Having said that, I can’t argue with the bill’s backers that many kids spend too much time on social media and have a hard time getting away from their devices. For that matter, so do many adults, but there is a long tradition of laws that protect children from things that are legal for adults.

The operative part of the bill calls for a civil penalty of up to $250,000 for a social media platform having features “that were known, or should have been known, by the platform to be addictive to children.” The service could “be liable for all damages to child users that are, in whole or in part, caused by the platform’s features, including, but not limited to, suicide, mental illness, eating disorders, emotional distress, and costs for medical care, including care provided by licensed mental health professionals.” There are some carve-outs, so this bill doesn’t ban everything that makes social media sites compelling, but it nevertheless runs the risk of preventing companies from offering features that kids love and should be able to use in moderation and, in some cases, with parental supervision.

I get it. Social media companies employ techniques designed to keep people online longer, and some of these affect children as well as adults. But that’s true of just about any product. Plenty of people consider sugar to be addictive, but that doesn’t stop companies from selling and marketing sugary sweets to children. If a child becomes obese after eating an excessive amount of Ben and Jerry’s ice cream, should the company be liable for both the physical and mental health consequences? What if that child also eats a lot of Lay’s potato chips? Should PepsiCo, which owns Frito-Lay, be sued as well? How do we know how many pounds the child gained from ice cream vs. how many from potato chips, and what about all the other aspects of the child’s life? Perhaps someone should sue their school for not having a vigorous enough PE program? Maybe food companies should be compelled to make their products unappealing to children as a way of preventing overconsumption.

There are also people who think TV is addictive, so what about shows that end each episode with a cliffhanger that hooks you into watching the next one, even though it’s way past your bedtime? By that definition, I’m addicted to just about every show I’ve “binge-watched.”

I don’t mean to trivialize a serious problem. My nonprofit, ConnectSafely.org, has devoted a great deal of resources to helping families deal with problematic internet use, but both the problem and the solution are far more complex than just limiting screen time or punishing social media companies for employing features designed to keep people online longer.

I want to end by applauding Assemblymembers Wicks and Cunningham for both of these well-meaning bills. They should be given ample consideration, but it’s important to focus on all the details and possible unintended consequences. I look forward to seeing how these bills evolve.

Disclosure: Larry Magid is CEO of ConnectSafely.org which receives financial support from Meta, Google and other technology companies that could be affected by these bills.
