I’ve been thinking about technology policy in the United States in light of Mitt Romney’s May 17, 2012 comments that nearly half of United States citizens “are dependent upon government, believe that they are victims, believe the government has a responsibility to care for them, believe that they are entitled to health care, to food, to housing…pay no income tax…[and can] never [be] convince[d]…to take personal responsibility and care for their lives.”
I’ll leave debunking the myths about poor and working people that Romney was relying on–and perpetuating–to others (for example, try Haroon Siddique or Michael Cooper on the taxes that are indeed paid by poor and working people). But I will take the opportunity to look at the ways that technology policy and reporting often make the same kind of false and damaging suppositions about people struggling to meet their basic needs, suppositions that misrepresent the reality of working people’s lives, intelligence, desires and opportunities.
There has been a disturbing recent trend of describing universal access programs as “welfarist.” For example, I recently visited Champaign and Urbana, Illinois, whose UC2B project promises to connect 2,500 households in underserved neighborhoods to a fiber-optic big broadband network for free. While I was there, the Daily Illini, the student newspaper of the local university, ran an article in which a community member described the project as “welfare Internet,” arguing that public tax dollars, through a federal grant, are being used unfairly to serve primarily poor and working-class households.
Or take the recent New York Times article about Google’s attempts to interest poor and working-class residents of Kansas City, MO–particularly in African-American neighborhoods–in signing up in advance for its $70-120/month high-speed broadband service. When some community members balked at signing up and putting $10 down for an unclear deal that might cost big bucks in the future, journalist John Eligon argued that the primary struggle for Google was “convincing residents of the importance of Internet access — to apply for jobs, do research, take classes and get information on government services,” as if residents simply didn’t understand why the internet is significant.
Assessments such as these are often couched in patronizing faux-concern about the lurking dangers of high-tech tools and networks for those unable to afford them on the open market.* For example, a recent New York Times article, “Wasting Time Is New Divide in Digital Era,” reported that children of parents who do not have a college degree are exposed to 1.5 more hours of media per day through televisions, computers and other gadgets than the children of parents with a college degree. Despite the fact that both groups of children used their high-tech tools primarily to watch videos, play games and connect to social media sites, the article decried a new and growing “time wasting gap,” suggesting that poor and working-class children are using internet technologies to avoid homework, stay up too late, and mortgage their futures to the momentary pleasures of the now. Sound familiar? The alternative would be to imagine, for example, that poor and working-class kids might lack access to the kinds of after-school sports and enrichment activities widely available to children of the professional middle and owning classes, and therefore spend a little more time watching TV and scanning Facebook.
I find the idea that poor and working people lack an understanding of the importance of the internet, don’t deserve access, and misuse it when they do manage to get their hands on it deeply insulting. I hear in these stories disturbing echoes of the crudest clichés about the supposed ignorance, laziness and backwardness of people who struggle to meet their basic needs.
There are other factors at play in the complicated relationship between technology and working people, factors that rarely get any attention in the mainstream media. For example,
— The consequences of data profiling, data mining and privacy intrusions are significantly more severe for working people and other marginalized groups (see Seeta Gangadharan’s “Digital Inclusion and Data Profiling”).
— Working people, women and men of color tend to disproportionately experience the more negative uses of technology in the workplace, in their neighborhoods and in their interactions with government (see my book Digital Dead End).
— Poor and working folks tend to know a market lock when they see one. Their experience with predatory lending, pay-as-you-go phones, rent-to-own agreements, payday loans, and other scams has taught them that the “Buy now, Pay later” approach–of Google Fiber, for example–is rarely a good deal for them (see Gary Rivlin’s terrific Broke, USA).
It might be easier for the media and policy-makers to draw on common stereotypes to posit that poor and working people don’t understand technology, are afraid of it, and won’t put it to good use anyway. But it is remarkable to me that a country so plagued with class inequality still looks for behavioral and individual explanations of the desperate poverty so many Americans experience. The kinds of assertions I cataloged above rely on a presumption that simply isn’t true: that poverty in the richest country in the world is an aberration, experienced by a damaged and suspect few.
As Mark Rank has shown in his superb One Nation, Underprivileged, poverty is not the minority experience in the US — the majority of us will face it at some point in our lives. Fifty-nine percent of Americans will live at least one year of their lives under the official poverty line ($11,170 a year for a single individual in 2012). Sixty-five percent of all Americans will at some point live in a household that receives means-tested welfare benefits, including SNAP/food stamps, SSI, Medicaid, and AFDC.
When we recreate myths about economic inequality–and the people who experience poverty–in our technology policy and reporting, we do all Americans a disservice. How would our policy be different if we understood communication–as facilitated by high-tech devices–as a fundamental human right central to the health and vigorous democratic functioning of our communities? What if the answer to “who deserves the internet?” was all of us?
* A significant number of people in the US cannot afford to pay for internet service, which is not surprising given that, according to David Cay Johnston’s new book The Fine Print, Americans pay 38 times what the Japanese pay for internet service per bit of data moved, and that we pay more for internet connections that are ranked 29th in the world in terms of speed, behind Lithuania, Ukraine, and Moldova.
With respect to the “time wasting,” it’s important to note that social media is *social*. People who can’t afford transportation or entertainment costs can still socialize with peers over the net. A number of people with disabilities use social media to foster their social well-being, a critical aspect of physical and mental health. Characterizing time spent on social media as inherently wasted suggests that the less affluent or differently abled have fewer rights to socialize. The core of this punitive mindset is decidedly Dickensian.
Social interaction is an important aspect of time spent on the internet, particularly for children – the demographic whose primary job is learning how to interact with their community. Given the increasing need to develop information literacy, time spent online offers the chance to do both at once. Look at the learning curve of adults who first engage with social media: there’s often an initial (sometimes persistent) disconnect between social mores in physical venues and those in play over the internet. If our social commons are to be digital, then we should encourage young people to become comfortable with the medium. We desperately need more civil civic discussion.