For the first time, I am simply going to post a link to another person’s content: Madeline Gabriel’s post, “Should You Share That Cute Dog and Baby Photo?” on her blog “Dogs and Babies.” But of course, since I am an academic, this “simple” redirect will be followed by a few points of analysis.
A few months ago, Cathy N. Davidson wrote a blog post on HASTAC in which she argues that all schoolchildren should be taught computer programming in order to achieve a “basic computational literacy.” She laments the lack of demographic diversity in programmers and wonders “What could our world look like if it were being designed by a more egalitarian, publicly educated cadre of citizens, whose literacies were a right not a privilege mastered in expensive higher education, at the end of a process that tends to weed out those of lower income?”
USC PhD student Alex Leavitt followed her proposal by inviting other academics to make 2012 their "Year of Code." Numerous people across the twitterverse are also participating in Codecademy.com's #codeyear.
Davidson and Leavitt's calls to code, both of which espouse a leftist politics of democratic or Do-It-Yourself coding, make me reflect on the different values currently competing in the software programming and academic spheres: proprietary models vs. open access/open source models. In particular, the academic debate about open access to scholarly knowledge recently reared its head in Congress when, in December 2011, the Research Works Act, a bill that would block mandates of public access to federally funded research, was introduced in the House of Representatives. The act is likely a response to recent moves by the Obama administration toward better access to scientific publications (see the America COMPETES Reauthorization Act of 2010 and the subsequent Request for Information on Public Access to Digital Data and Scientific Publications). While the Research Works Act will probably not pass, it speaks to the conflict inside and outside academia between privileging information and disseminating information, between profit and public interest.
What, one might wonder, might code coming from within the academy, produced, as Davidson envisions, by an educated public, look like? And, in terms of student grades or professional tenure, how would it be evaluated?
It is an interesting exercise to compare Google and Facebook with academia. Google and Facebook are wildly successful because they are a contradiction: they are free to the public and friendly to the non-expert, yet their code is secret and they make money from said public through ads. They are open but closed, profit-making but free. American academia, on the other hand, makes its "secrets" available, but only to those who pay large amounts of money and who strive to become experts.
Traditional academic tenure and evaluation is alien to the kind of collaborative (and proprietary) code farming that Google encourages. How could a tenure committee evaluate one coder out of a team of hundreds? Even with a trail of changes made by each individual, it would be almost impossible to separate that person's work from that of others. Of course, not all coding is done collaboratively, but I would argue that most large-scale projects with major impact are. As more examples of academic coding emerge, the tenure process will hopefully adjust to accommodate new modes of authorship in the digital age.
One high-profile academic seems frightened at the prospect of academia’s descent into the digital. Stanley Fish calls “‘blog'” “an ugly word” for its impermanence. As someone who wants his critical insights to be “decisive” and “all [his],” Fish dislikes thinking of himself as a blogger–a figure who seems so interconnected with everything around him that he ceases to exist. Fish is disturbed by this possible loss of identity and “linearity,” by the web’s tendency to move “into a multi-directional experience in which voices (and images) enter, interact and proliferate in ways that decenter the authority of the author who becomes just another participant.” Poor Stanley Fish experiences this every time he opens his browser.
Fish goes on to quote Kathleen Fitzpatrick as affirming this death of the author: “all of the texts published in a network environment will become multi-author by virtue of their interpenetration with the writings of others.”
I would argue that coding and other digital forms of authorship often invoke this sense of the networked self to an even greater extent than traditional scholarship. In part that is probably because online social networks allow scholars to continually mix their ideas with the ideas of others. Seeing one's own voice as just one tweet in a tsunami of tweets can be a bit humbling. But then again, when people band together and find common ground, their accomplishments can be even grander than what one can do alone. There is a happy medium to be found between solo pursuits and selfless proprietary software. I am heartened to note that a vast amount of software developed through academic institutions is open access and open source, including Sakai, Weka, and the Stanford NLP software.
The subtitle of an August 2011 National Geographic article concludes with a rather provocative question: “Robots are being created that can think, act, and relate to humans. Are we ready?” A cursory thought about the things on my desk that need organizing, the errands that need running, and the meals that need preparing elicits a quick “of course” from me—“I’d like to have my robot now, please.” In more reflective and contemplative moments, though, I try to imagine some of the nuances of human-robot interaction (HRI), particularly how such interactions would redefine not only how we communicate with one another, but by extension, how the very notion of communication would be reshaped.
For most of us, our interactions with technology are strictly non-humanoid. We e-mail, text, tweet, upload, download, blog, skype, and share, but rarely do we speak with or come into physical contact with technologized incarnations of ourselves. And when we do, we often might not know it, since we are not in physical proximity to the telephone operator transferring our call or the app administrator playing a game with us. Of course, robots have worked on industrial assembly lines for decades, albeit in the form of robotic arms rather than embodied laborers. Increasingly, humanoid robots are also being introduced into our social and personal spheres. While far from common in the workplace or home, humanoids already have been tested as receptionists, teacher’s assistants, showroom models, companions for the elderly, and child sitters. This current adjacency to and future integration with human society compels us to reexamine what we desire in verbal, visual, and tactile modes of communication. We must ask—and answer—some weighty questions: How will these robots impact day-to-day communication? How will human-human communication be reshaped as a result of humanoid participation? When an English-speaking robot is being programmed with language, what form of English will it be? Will our existing notions about class and education be reiterated in humanoid language software? And, more broadly, in what ways will our ideas about agency and subjectivity be modified and what might “humanities” come to mean?
As humanoid robots are further integrated into the human sphere, their creators are arduously trying to make them look, sound, and move more like humans. However, as Chris Carroll and Max Aguilera-Hellweg point out in their National Geographic article, current models underscore how much humanoids do not resemble humans. From a distance, some humanoids might already “pass” as human, but up close one sees that their mouths do not close completely, their speech still comes across like “scripted observation” rather than dialogue, and their skin lacks elasticity—all of which, as Carroll and Aguilera-Hellweg remark, lends a bizarre quality to these robots. We strive to make them resemble us as much as possible. We anthropomorphize them to make them more acceptable to us. Yet, in producing robots that are “more like us” manufacturers replicate some of the more problematic aspects of our cultural and interpersonal constructs.
One particular humanoid model was subjected to a transformation that illustrates this conundrum. Yume, a humanoid robot created by Japan’s Kokoro Company, was deemed not quite believable enough to “pass,” so she was shipped off to Carnegie Mellon’s Entertainment Technology Center where five graduate students worked to revamp her and make her a worthy “other” for human communication. The result, as one of the students summarizes, is an actroid who is “‘slightly goth, slightly punk, all about getting your attention from across the room’”. While her makeover was not considered a wild success, what is noteworthy about it, I would argue, is how she has been sexualized in order to grab attention “from across the room.” The physical and sartorial attributes that render a young human female fetching and approachable in the human world have been transposed onto a carefully modeled collection of wires, metal plates, and silicone in order to make it more “believable” in whatever “entertainment” context it is destined for. But do we really want to copy and paste our current norms onto this new terrain?
It is difficult not to see elements of Narcissus's and Pygmalion's stories here. We seem to be so enamored of ourselves that we are willing to replicate qualities that many of us deem problematic, even detrimental, to fruitful, engaging, respectful relationships. We might not have fallen in love (yet) with these humanoids, as Pygmalion did with his creation, but the ongoing work of robotics designers suggests that the prize of near-perfect object-companions is worth the labor. Which raises still further questions: What sorts of interactions will be acceptable and what types impermissible? How far down the Stepford and "Svedka" roads do we want to go? Could increased interactions with humanoids—which lack self-awareness and emotion—broaden our understanding concerning sentience and its role in communication? Does HRI ultimately suffer because we know a light remains off in the attic even though the battery pack is fully charged? If we want to move beyond "Hello Kitty" clad Yumes, then people whose work is centered in communication need to be involved in research and development.
In 1957, James Vicary claimed that a movie theater in Fort Lee, NJ had been broadcasting subliminal messages to viewers. More specifically, he claimed that ads for Coca-Cola and popcorn, flashed for 0.03 seconds, had led to an increase in sales of those items in the weeks that followed. Broadcasters and regulators subsequently banned anything that came remotely close to subliminal advertising. When challenged to replicate the results of this study, however, Vicary failed to do so, and his claim has been regarded as a hoax for decades.
Although the real results of Vicary's study remain inconclusive, more recent work has suggested that stimuli of which we are not fully aware can indeed influence our behavior. For example, a series of studies on "nonconscious influences" has suggested that stimuli too fast or otherwise too weak for our sensory organs to consciously perceive may nevertheless have a powerful effect on our thoughts and behavior. In one study in particular, researchers exposed some participants to either an Apple logo or an IBM logo by flashing it in front of them on a screen for 2 milliseconds, below the threshold of conscious perception. Later, when participants were asked to come up with uses for a brick (a creativity assessment), the researchers found that those primed with the Apple logo were much more creative than those primed with the IBM logo. They reasoned that this happened because of the association between the Apple brand and creativity.
In addition to this study, there have been many other instances in which individuals' behavior was shaped by stimuli with which they were nonconsciously primed (rather than detail each of these studies here, I will note that googling "nonconscious influences" will turn up many of them). While the implications of these findings are far-reaching, I believe it is important to consider the consequences that nonconscious influences can have on our (and especially our students') behavior. In a previous post, I noted how the average American is exposed to roughly 5,000 advertisements in a single day.
If the research findings on nonconscious influence have any merit, it's easy to imagine the potential effects. Although we try to teach our students well, we are also competing with 5,000 other stimuli they are exposed to, a majority of which they are not even aware they are perceiving. Perhaps it is not our students' fault when we get writing assignments that we deem "too dry" and uncreative. They may have been written on an IBM computer.
Although nonconscious influence may be a hugely complex phenomenon, I have often asked myself whether there is something I can learn from all this research and use to help my students in their academic endeavors. Ideally, I would love to have pictures of the Apple logo in every classroom I teach, but that doesn't seem reasonable or feasible, or even ethically sound. Additionally, if we educate students about the possibility of nonconscious influences on their behavior, is it even remotely likely that anything would change? And if so, what do we tell them, short of cutting themselves off from all media? Thus, I invite others to provide their thoughts on this issue.
I suppose after Linell's, John's, and David's timely and thoughtful responses to Grant McCracken's Symposium keynote talk, it might be overkill or overdue to pitch in my inflation-adjusted two cents.
But seeing as some of my BLSCI colleagues might be awaiting something from one who could talk some smack but still state facts, get down to brass tacks, not exactly attack but risk a lack of tact, and maybe attract fellow hacks to take a crack at McCracken. Wise-cracks and shellackings, maybe followed by retractions and being sent home packing.
Or maybe a pact. But not exactly to shack up intellectually with this jack of all trades and his tract on value-extraction.
Alack, what to make of McCracken?
I started calling myself an anthropologist not too long ago, and since Dr. McCracken does as well, I suppose we have something in common. I suppose our differences are an invitation for me to police the boundaries of our discipline. The stakes seem to be broader than just defining what a proper understanding of anthropology or ‘culture’ can or should be. In any case, for all their propensity to deploy opaque jargon, anthropologists don’t maintain a monopoly on the concepts and methodologies of their field. Ethnography is increasingly popular in business, law, design, as well as other academic disciplines. The right to talk about culture belongs to everyone. I don’t think many anthropologists would object to that sentiment.
That said, McCracken’s take-away message was that successful companies need to be hip to culture and its vagaries, especially of a certain category of people he referred to repeatedly as the ‘Qydz.’
The Qydz are, as I understood McCracken, a rather large and underexamined tribe. They actually live among us, rather than in some faraway rainforest or mountainous highland. (At least, we aren’t so interested in the Qydz residing in such remote lands.)
These Qydz are the lifeblood of contemporary capitalism. Any business worth its salt should devote its energies toward studying the values and aesthetic tastes of this people. For the Qydz are nothing if not consumers. And oh, the stuff they consume! Baggy jeans! Flip-out keyboard texting gizmos! Snapple!
Apparently, the Qydz are not born or raised. They have no provenance, no parentage, no institutions that foster their development. They simply appear in their present form (or ‘respawn’ as they might say in their own video-game parlance), as autonomous beings arranged into ‘generations’ we can only designate as ‘X’ or ‘Y’ (no word yet on any Generation Z sightings). Qydz culture prizes individualism, but their collective will is mighty and a thing to be feared only if business does not have the products to appease them.
McCracken is right to suggest that capitalism has become increasingly dependent on the desires of consumers as a resource to mine for value. (Actually, he never said this outright, but it seems central to his research agenda.) Is this a fair assessment of capitalism, Linell seems to ask in the previous post? I would add: is this a fair assessment of desire?
For McCracken, the wants of the Qydz are limited only by their own imaginations, which, he contends, are limitless. Business can only hope to track Qydz desires by means of increasingly sophisticated trend-tracking technology and–gasp!–ethnographic methods. Yes, really getting to 'hang' with some Qydz is a thrilling and potentially dangerous experience.
Academics spend oodles of time with Qydz, but McCracken may lament the time professors waste speaking to them, teaching them of our ways of life, rather than listening to and observing them. Pity.
It is increasingly clear that the Qydz are a natural resource we must safeguard carefully, lest they begin to imagine and wish for things business cannot manufacture and sell to them.
Several of us have been preparing and sharing ideas ahead of our faculty roundtable discussion today. For you Baruchians, it will take place Tuesday, April 12, 2:30-4pm, in the SOC/ANT department conference room.
We will talk about sources, citations, designing plagiarism-resistant assignments, using technology in research, turnitin.com, and more.
The subject has me reflecting on a book that I read months ago but that has yet to release me from its coiling grip. It seems absurd to say this, but The Culture of the Copy, by Hillel Schwartz (Zone Books, 1996), is utterly original. It's hard to imagine a more kaleidoscopically visionary 565 pages. Maybe I exaggerate, for irony's sake, but this is essentially a cultural history of copies, fakes, forgeries, doubles, twins, reproductions, and the like. The focus is a sidelong view of our obsession with (and ambivalence about) originality, authenticity, singularity, and identity. Its central argument is, I think, that our human nature, the making of ourselves, has always been the making of doubles and likenesses. Schwartz is keenly interested in moments when facsimiles stand in for originals, when duplicates dupe, when samples take on their own lives. The book's introduction (cleverly titled "Refrain") is the story of the man known as the Real McCoy, and this biographical story itself also functions as a recapitulation of the rest of the book. It's an entertaining read, letting the myriad curiosities and strange tales speak for themselves, and yet the back of the book contains more than 150 pages of endnotes to satisfy the scholar.
I will stop short of a book review here. There are some very provocative insights throughout, but I will stick to the several pages in which Schwartz discusses plagiarism, which come on the heels of this conclusion about sampling: "Sampling is what imperialists did when they colonized 'undeveloped' lands, calling theft 'development'; sampling is what ghettoized colonies do in revolt against property laws wired around them" (310).
Schwartz traces complaints of plagiarism back into antiquity, suggesting that it is not a feature solely of literate societies. There are audacious examples galore: "Samuel Taylor Coleridge rabidly charged others with theft, but his own perpetual plagiary he considered a form of spirit possession: 'I regard truth as a divine ventriloquist. I care not whose mouth the sounds are supposed to proceed…'" I doubt many Baruch students can claim the right to rip off with such transcendental air, which perhaps underlines how plagiarism is defined morally as a debased form of copying. Appropriating in the name of poetry is not quite plagiarism?
Plenty of ironic cases in the history of plagiarism:
- A passage on seeing double was stolen repeatedly by 18th-century scientists.
- The first book on photography published in the US retouched an English book.
- Victorian ministers hand copied sermons on honesty from printed books to make them look like originally penned texts.
- The Boston Globe ran a story on a plagiarized 1991 commencement speech that was published in the New York Times.
- Lexicographers responsible for defining plagiarism were accused of plagiarizing definitions.
- A University of Oregon booklet plagiarized its section on plagiarism. (312-13)
Schwartz is gloomy about defending against plagiarism: "our culture of the copy tends to make plagiarism a necessity, and the more we look for replays to be superior to originals, the more we will embrace plagiarism as elemental" (313).
The radical left has offered solutions: “the 1988 Festivals of Plagiarism in Glasgow, London, San Francisco, and Berlin exalted plagiarism as a defiance of capitalism, whose commodification of the world and of art proceeds upon the pretense of originality and the projection of uniqueness… plagiarism must be a thoughtful assault upon privilege, retaking that which should belong to everyone” (314).
After more citations of students and scholars caught plagiarizing papers and exasperatedly insisting they thought it was their own words, Schwartz concludes: “Plagiarism in our culture of the copy is sticky with feelings of originality-through-repetition, revelation-through-simulation. That plagiarism should be taken up on all sides–as a means for subverting the System and as a means for getting an edge in business, science, or politics–is proof of its centrality and the reason why plagiarism is treated so gingerly, defended so boldly, resumed so intemperately. Like forgery, plagiarism is a personal addiction… Plagiarism is, moreover, a cultural addiction, and I use that word with malice, for the ubiquity of the metaphor of addiction is itself a clue to our embrace of the rhetoric of replay despite a professional anxiety about disorders of repetition” (315).
Do you think plagiarism is not an epidemic but endemic, not only to the academic world but also to scientific, political, business, and cultural life? If so, do we need a new paradigm to deal with the matter of intellectual and cultural property in an age of mass duplication and duplicity?
Reflecting on John’s recent post on Japan, as well as my last contribution to this forum, I think it is time we do indeed start thinking and talking about our implicatedness in the transformations in and of the earth itself. In the wake of the earthquake-tsunami-nuclear crisis in Japan, the New York Times gently reminded readers:
Three of the world’s chief sources of large-scale energy production — coal, oil and nuclear power — have all experienced eye-popping accidents in just the past year. The Upper Big Branch coal mine explosion in West Virginia, the Deepwater Horizon blowout and oil spill in the Gulf of Mexico and the unfolding nuclear crisis in Japan have dramatized the dangers of conventional power generation at a time when the world has no workable alternatives able to operate at sufficient scale.
In all three scenarios, we heard leaders and experts assume the mantle of authority to dissuade the panic-stricken from questioning our energy economy, or–gasp!–from suggesting we make meaningful moves toward alternatives. These 'accidents,' as the NYT itself terms them, continue to be framed as matters of risk management, regulation, and oversight.
Let me suggest a different take: the environmental and health risks of nuclear meltdowns, oil spills, and mine explosions are not technical failures but political ones. The expertise and ownership infrastructures necessitated and supported by these industries are what have produced "irrational fears about risk." Why do we live in a world where people don't know what processes power their lightbulbs, washing machines, and computers? We need a renewed global conversation about energy, technology, and democracy now.
As a colleague of mine reminded me recently, this conversation has precedents: see Ivan Illich’s 1973 essay “Energy and Equity”. A pithy excerpt:
Even if nonpolluting power were feasible and abundant, the use of energy on a massive scale acts on society like a drug that is physically harmless but psychically enslaving. A community can choose between Methadone and “cold turkey”—between maintaining its addiction to alien energy and kicking it in painful cramps—but no society can have a population that is hooked on progressively larger numbers of energy slaves and whose members are also autonomously active.
I want to draw attention to the ideological blindspots hidden in the notion that ‘natural disasters’ bring people together under the banner of humanitarianism. This is the imperative sense of our moral responsibility (‘response-ability,’ as John framed it), and there is nothing wrong with it: we need ever more of this kind of altruism and less cynicism. But the thing about natural disasters is how they naturalize many aspects of our world that are not natural. In fact, we see this view as a smokescreen for all kinds of new projects of class power, as documented in Naomi Klein’s The Shock Doctrine. As geographer Neil Smith noted about Hurricane Katrina, a catastrophe that effectively functioned as a mass eviction of poor people in New Orleans, “far from flattening the social differences, disaster reconstruction invariably cuts deeper the ruts and grooves of social oppression and exploitation.”
This brings up the question I posed before: what kind of horror movie is contemporary capitalist society? Comedian Patton Oswalt offered three possibilities: zombies, spaceships, wastelands. In the midst of the current Japanese calamity, it seems appropriate to call for the return of the monster movie.
Many American audiences enjoyed and dismissed Godzilla as a campy sci-fi flick and thus missed its scathing critique of the nuclear age. The monster, a symbol of science gone berserk, appeared in cinemas in 1954, the same year as the thermonuclear detonation at Bikini Atoll. "Audiences who flocked to 'Gojira' were clearly watching more than just a monster movie. The film's opening scenes evoked the nuclear explosion in the Pacific and the damaged Japanese bodies so poignant to domestic viewers. Godzilla — relentless, vengeful, sinister — looms as an overt symbol of science run amok. The creature's every footstep and tail-swipe lay bare the shaky foundations on which Japan's postwar prosperity stood," notes Peter Wynn Kirby. (Interestingly, a new monster film by Guillermo Del Toro, 'Pacific Rim,' has come under pressure to ensure 'insensitive' references to Japan being attacked are excised from the screenplay.) I wonder what idiom the political mobilization against the excesses of the science/energy industrial complex might have to develop to capture people's attention the way Godzilla did in the 1950s.
So, I am concerned and skeptical about the attempts to silence political debate under the rubric of "we must all band together in a crisis." Human beings as a global society are transforming the earth to the extent that our collective activities are increasingly entangled with so-called natural processes. Some have heralded this era as the 'Anthropocene.' Perhaps there is no way back, but there must be a different way forward.
Page A15 of the New York Times on March 7th looked suspiciously like a story from The Onion about the tangled mess that is teacher evaluation in New York City public schools. In a piece whose headline, "Evaluating New York Teachers, Perhaps the Numbers Do Lie," wins the award for most understated of the year, Michael Winerip tells the (predictably?) sad story of Stacey Isaacson, a 7th grade English and Social Studies teacher at the Lab School, described as "very dedicated," "wonderful," and "one of a kind" by teachers, students, and principals alike.
So why, then, is poor Ms. Isaacson ranked in the 7th percentile of city teachers when it comes to student academic progress?
Because of this formula, designed by the Department of Education's "accountability experts" (satirists, start your engines) to calculate a teacher's value-added score:
As someone who once taught for the NYC Department of Education and is also a product of it, I wasn't really surprised that they had gotten it all wrong. I wasn't even surprised that they would think such a formula could be an accurate method for tenure evaluation. They did, however, outdo themselves in the category of overall incoherence; not only did this tool strike me as wrong-headed, but it was also completely unintelligible. This is so unbelievably unhelpful a formula (ready-made for critique by visualization genius Edward Tufte) that no teacher could be expected to look at it and see her work (or her true challenges) reflected within it. Matrix-like in its complexity and opaque in its reasoning, it is a formula incapable of communicating what it is measuring or how a teacher might improve her practices based upon it. And from what I can tell, the variables are wonky, too.
It is not until the 16th paragraph of the article that Winerip summons the courage to try to explain the thing:
According to her "data report," Isaacson's students had a prior proficiency score of 3.57. "Her students were predicted to get a 3.69– based on the scores of comparable students around the city. Her students actually scored a 3.63. So Ms. Isaacson's value added is 3.63 minus 3.69." Simple enough, right? Wrong. The author– who knows he's hit pay dirt with this one– goes on:
"These are not averages. For example, the department defines Ms. Isaacson's 3.57 prior proficiency as 'the average prior year proficiency rating of the students who contribute to a teacher's value added score.'"
Eh? And the calculation for her predicted score is based on 32 variables, which are plugged into a statistical model– the one that made me feel like I was, surely, reading The Onion.
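Stripped of its 32 variables, the arithmetic at the heart of the score is trivial: actual proficiency minus predicted proficiency. A minimal sketch, using the numbers Winerip reports; the function name is my own illustrative assumption, and this deliberately omits the DOE's actual statistical model and its percentile conversion:

```python
# Sketch of the core "value-added" arithmetic: how far students landed
# above or below the score predicted for them. This is NOT the DOE's
# 32-variable model, only the final subtraction step from the article.

def value_added(actual: float, predicted: float) -> float:
    """Positive: students beat the prediction; negative: they fell short."""
    return round(actual - predicted, 2)

prior = 3.57       # Ms. Isaacson's students' prior-year proficiency
predicted = 3.69   # prediction based on comparable students citywide
actual = 3.63      # what her students actually scored

print(value_added(actual, predicted))  # -0.06
```

On this naive reading, her students scored 0.06 above their prior year but 0.06 below prediction, and it is that second number, filtered through the opaque model, that lands her in the 7th percentile.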
Anyone reading this case study of Ms. Isaacson will naturally wonder a few things, like, "Wouldn't it be fun to calculate what percentage of Joel Klein's contract at News Corporation represents Ms. Isaacson's salary?" or, "Wouldn't it be interesting to invite these statisticians to actually teach us this formula and how it works?" I frequently work on assessment at the Schwartz Institute, and it is also a built-in aspect of every course I teach. So I know that evaluating teaching and learning is a tricky thing indeed, a hall of mirrors in which you think you see the student reflected but often, you don't.
I decided, then, to concoct my own formula, with my own variables, to evaluate the teaching that I do at Baruch in my capacity as a Fellow and an instructor of Communication Studies. What variables get in the way of student progress that cannot be accounted for after you have observed my class, read my syllabus, and tested my students for their proficiency level?
What if you really tried to articulate the variables that come into play when facing a group of students and a set of learning objectives?
Winerip explains that teachers are eligible for tenure based upon three categories: instructional practices (including observations), contribution to the school community, and student achievement (which is where the formula comes in). Now, I’ve never been much of a whiz at statistics, but maybe that’s okay. After all, if the communications people made the formulas, and the formula people made the communications, perhaps we’d all start getting somewhere?
So please—in the spirit of collaborative learning, improve upon my draft and post your own visual and/or variables in the comments section.