"

Connectivism and Connective Knowledge: Essays on Meaning and Learning Networks

Stephen Downes

National Research Council Canada

 

ISBN: 978-1-105-77846-9

Version 1.0 – May 19, 2012

Copyright (c) 2012 Stephen Downes

This work is published under a Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license.

This license lets you remix, tweak, and build upon this work non-commercially, as long as you credit the author and license your new creations under the identical terms.

View License Deed: http://creativecommons.org/licenses/by-nc-sa/3.0

 

 

Contents

 

Networks, Connectivism and Learning

Semantic Networks and Social Networks

The Space Between the Notes

The Vagueness of George Siemens

Network Diagrams

What Networks Have In Common

The Personal Network Effect

Diagrams and Networks

The Blogosphere is a Mesh

The Google Ecosystem

What Connectivism Is

Connectivism and its Critics: What Connectivism Is Not

Connectivism and Transculturality

Theoretical Synergies

A Truly Distributed Creative System

The Mind = Computer Myth

What’s The Number for Tech Support?

Informal Learning: All or Nothing

Non-Web Connectivism

Meaning, Language and Metadata

Principles of Distributed Representation

When Words Lose Meaning

Concepts and the Brain

On Thinking Without Language

Planets

Types of Meaning

Naming Does Not Necessitate Existence

Connectivism, Peirce, and All That

Brakeless Trains – My Take

Meaning as Medium

Patterns of Change

Speaking in LOLcats

     

     

    Networks, Connectivism and Learning

    Semantic Networks and Social Networks

     

    This article was published as "Semantic Networks and Social Networks" in The Learning Organization Journal, Volume 12, Number 5, pp. 411-417, May 1, 2005.43

    Abstract

    Purpose: To illustrate the need for social network metadata within semantic metadata.

    Design/methodology/approach: Surveys properties of social networks and the semantic web, suggests that social network analysis applies to semantic content, argues that semantic content is more searchable if social network metadata is merged with semantic web metadata.

    Findings: The use of social network metadata will alter semantic searches from being random with respect to source to direct with respect to source, which will increase the accuracy of search results.

    Research limitations/implications: Suggests that existing XML schemas for semantic web content be modified.

    Practical implications: Introduction and overview of a new issue.

    Originality/value: Foundational to the concept of the semantic social network; will be useful as an introduction to future work.

    Keywords: Information networks, Internet, Social networks

    Paper type: Conceptual paper

    Semantic Networks and Social Networks

    A social network is a collection of individuals linked together by a set of relations. In discussions of social networks the individuals in question are usually humans, though work in social network theory has found similarities between communities of humans and, say, communities of crickets44 or members of a food web.45 Entities in a network are called “nodes” and the connections between them are called “ties”.46 Ties between nodes may be represented as matrices, and the properties of these networks therefore studied as a subset of graph theory.47
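
    To make the graph-theoretic view concrete, here is a minimal sketch in Python (the people and ties are invented purely for illustration): a small network represented as an adjacency matrix, from which a basic property such as the degree of each node can be read off.

# A small, invented social network represented as an adjacency matrix.
people = ["Alice", "Bob", "Carol", "Dave"]

# ties[i][j] = 1 means person i and person j are connected (undirected ties).
ties = [
    [0, 1, 1, 0],  # Alice is tied to Bob and Carol
    [1, 0, 1, 0],  # Bob is tied to Alice and Carol
    [1, 1, 0, 1],  # Carol is tied to Alice, Bob and Dave
    [0, 0, 1, 0],  # Dave is tied to Carol
]

# A basic graph-theoretic property: the degree (number of ties) of each node.
for i, name in enumerate(people):
    print(name, "has", sum(ties[i]), "ties")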

    A key property of social networks is that nodes that might be thought of as widely distant from each other – a farmer in India, say, and the President of the United States – may actually be much more closely connected than otherwise imagined. This phenomenon, sometimes known as “six degrees”, was measured48 and, as the name suggests, no more than six steps were required to connect any two people in the United States.49 With the arrival of the internet as a global communications network, ties between individuals became both much easier to create and much easier to measure.

    Social networking web sites fostering the development of explicit ties between individuals as “friends” began to appear in 2002. Sites such as Friendster, Tribe, Flickr, the Facebook and LinkedIn were early examples. Less explicitly based on fostering relationships than, say, online dating sites, these sites nonetheless sought to develop networks or “social circles” of individuals of mutual interest. LinkedIn, for example, seeks to connect potential business partners, or prospective employees with potential employers. Flickr connects people according to their mutual interest in photography. And numerous sites offer dating or matchmaking services. After an initial surge of interest, however, social networking sites have tended to stagnate.50 It is arguable that social networking, by itself, has limited practical use.

    The semantic web, as originally conceived by Tim Berners-Lee, “provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries”.51 Developed using the Resource Description Framework (RDF), it consists of an interlocking set of statements (known as “triples”). “Information is given well-defined meaning, better enabling computers and people to work in cooperation.”52 The semantic web is, therefore, a network of statements about resources.

    In particular, RDF enables the creation of statements intended to describe different types of resources. The terms used in these statements are defined in schemas, themselves RDF documents, which list the terms to be used and (in some cases) the types of values allowed, and the relations between them. “Using RDF Schema, we can say that ‘Fido’ is a type of ‘Dog’, and that ‘Dog’ is a sub class of animal.” Beyond schemas, ontologies enable complex representations of related entities and their descriptions.
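
    As a hedged illustration of the ‘Fido’ example (not code from the original article), the two statements can be written as RDF triples using the third-party rdflib Python library, with invented example.org URIs:

from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# Two triples: "Fido is a type of Dog" and "Dog is a subclass of Animal".
g.add((EX.Fido, RDF.type, EX.Dog))
g.add((EX.Dog, RDFS.subClassOf, EX.Animal))

print(g.serialize(format="turtle"))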

    Though applications of the semantic web in particular have thus far been limited, there have emerged since its introduction numerous projects characterizing and encoding descriptions of different types of resources in XML.53 The majority of these projects seem to be centred on the classification of information and resources. For example, learning object metadata (LOM) describes learning resources. Dublin Core provides bibliographic information about resources. These resources are typically identified explicitly in the XML or RDF, usually with a uniform resource identifier (URI) based on the resource’s address on the world wide web, or via some other identifier system, such as a digital object identifier (DOI).

    Outside professional and academic circles, arguably the most widespread adoption of the semantic web has been in the use of RSS. RSS, known variously as rich site summary, RDF site summary or really simple syndication, was devised by Netscape in order to allow content publishers to syndicate their content, in the form of headlines and short introductory descriptions, on its My Netscape web site.54 The use of RSS has increased exponentially, and now RSS descriptions (or its closely related cousin, Atom) are used to summarize the contents of hundreds of newspapers and journals, weblogs (including the roughly eight million weblogs hosted collectively by Blogger, Typepad, LiveJournal and Userland), wikis and more.

    There are no doubt purists who deny that RSS is an instantiation of the semantic web. However, all RSS files are undeniably written in XML, and a type of RSS (specifically, RSS 1.0) is explicitly written in RDF.55 At its core, RSS consists of some simple XML elements: a “channel” element defining the publication title, description and link; and a series of “item” elements defining individual resource titles, descriptions and links. Since RSS 1.0, however, the RSS format has allowed these basic elements to be extended; the role of schemas is fulfilled by namespaces, and these namespaces define (sometimes implicitly) a non-core vocabulary. Such extensions (also known in RSS 1.0 as “modules”) include Dublin Core, Creative Commons, Syndication and Taxonomy.56
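
    A minimal sketch of the core RSS structure just described, built with Python's standard library; the channel and item values are invented:

import xml.etree.ElementTree as ET

# Build the core RSS elements: a channel with a title, link and description,
# and a single item summarizing one resource.
rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example Weblog"
ET.SubElement(channel, "link").text = "http://example.org/"
ET.SubElement(channel, "description").text = "An invented feed used only for illustration."

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "First post"
ET.SubElement(item, "link").text = "http://example.org/posts/1"
ET.SubElement(item, "description").text = "A short introductory description of the post."

print(ET.tostring(rss, encoding="unicode"))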

    Initiatives to represent information about people in RDF or XML have been fewer and demonstrably much less widely used. The HR-XML (Human Resources XML) Consortium has developed a library of schemas that “define the data elements for particular HR transactions, as well as options and constraints governing the use of those elements”.57 The Customer Information Quality TC, an OASIS specification, remains in formative stages.58 And the IMS learner information package specification restricts itself to educational use.59 It is probably safe to say that there is no commonly accepted and widely used specification for the description of people and personal information. As suggested above, developments in the semantic web have addressed themselves almost entirely to the description of resources, and in particular, documents.

    Outside professional and academic circles, there have been efforts to represent the relations between persons found in social networks explicitly in XML and RDF. Probably the best known of these is the Friend of a Friend (FOAF) specification.60 Explicitly RDF, a FOAF description will include data elements for personal information, such as one’s name, e-mail address, web site, and even one’s nearest airport. FOAF also allows a person to list in the same document a set of “friends” to whom the individual feels connected. A similar initiative is the XHTML Friends Network (XFN) (GMPG, 2003). XFN involves the use of “rel” attributes within links contained in a blogroll (a “blogroll” is a list of web sites the owner of a blog will post to indicate readership).
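
    A hedged sketch of a FOAF-style personal description, again using the third-party rdflib library; the person, addresses and "friend" below are invented:

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

g = Graph()
me = URIRef("http://example.org/people/jane")
friend = URIRef("http://example.org/people/ravi")

g.add((me, RDF.type, FOAF.Person))
g.add((me, FOAF.name, Literal("Jane Example")))
g.add((me, FOAF.mbox, URIRef("mailto:jane@example.org")))
g.add((me, FOAF.homepage, URIRef("http://example.org/~jane")))
g.add((me, FOAF.knows, friend))   # the explicit "friend" tie

print(g.serialize(format="turtle"))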

    Though FOAF and XFN have obtained some currency, it is arguable that they have fallen into the same sort of stagnation that has befallen social networking web sites. While many people have created FOAF files, for example, few applications (and arguably no useful applications) have been developed for FOAF. And while some useful extensions to FOAF have been proposed (such as a trust metric, PGP public key, and default licensing scheme), these have not been adopted by the community at all.

    Perhaps, given the demonstrable lack of enduring interest in social network systems, either site-based, as in LinkedIn and Orkut, or semantic web-based, as in FOAF or XFN, it could be argued that there is no genuine need for a social network system (beyond, perhaps, matching and dating sites). Perhaps, as some have argued, such systems, once they get too large to be manageable, simply collapse in on themselves, their users suffocated under the weight of millions of enquiries and advertising messages, as happened to e-mail, Usenet and IRC.61

    But the evidence seems to weigh against this supposition. Certainly, the management of personal information has long been touted as necessary for authentication. Authentication – i.e. a mechanism of proving that a person is who they say they are – is used to control access to restricted information. Projects such as Microsoft’s Passport and the Liberty Alliance have for years attempted to promote a common authentication scheme. Sites such as LiveJournal and Blogger have begun to require login access in order to submit comments, as a means of discouraging spam. Newspapers, online journals and online communities typically require some sort of login process. Projects such as SxIP and light-weight identity (LID)62 have attempted to create a single sign-on solution for logins. So there is a need for personal descriptions, at least to control access.

    We could perhaps leave descriptions of identity as something for individual sites to work out were there not wider issues pertaining to the semantic web that cannot be addressed without at least some element of personal identity. To put the problem briefly: so long as descriptions of resources are based solely on the content of those resources, users of the semantic web will be hampered in their efforts to learn about new resources outside the domain of their own expertise. The reason for this is what might be called the “dictionary principle” – in order to find a resource, the searcher must already know about the topic domain they are searching through, since resources are defined in terms specific to that domain (in other words, if you want to find a word in a dictionary, you have to already know how to spell it).

    In fact, what has tended to happen in the largest current implementation of the semantic web, the network of RSS resources, is that searchers have, within certain parameters, tended to seek out resources randomly. They type in a search term in Google, for example, without any foreknowledge of where the resource they are seeking will turn up. They tend to link to sources they find in this manner; thus, the network of connections between resources (expressed in RSS, as on web sites, as links) manifests itself as a random network.

    The proof of this is found in the studies of social networks discussed at the beginning of this paper. The links found in web pages are instances of what are known as “weak ties”. Weak ties are acquaintances who are not part of your closest social circle, and as such have the power to act as a bridge between your social cluster and someone else’s.63 Weak ties created at random in this way lead to what Gladwell called “supernodes” – individuals with many more ties than others (Gladwell, 2000). In other words, some sites get most of the links, while most others get many fewer links. “A power-law distribution basically indicates that 80 per cent of the traffic is going to 1 per cent of the participants in the network.”64

    Numerous commentators, from Barabasi forward, have made the observation that power laws occur naturally in random networks, and some pundits, such as Clay Shirky, have shown that the distribution of visitors to web sites and links to web sites follows a power law distribution.65 Our purpose here is to take the inference in the opposite direction: because readership and linkage to online resources exhibit a power law distribution, it follows that these resources are being accessed randomly. Therefore, despite the existence of a semantic description of these resources, readers are unable to locate them except via the location of an individual – a super connector – likely to point to such resources.
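
    The shape of such a distribution can be illustrated with a toy simulation (not drawn from the paper, and the numbers are arbitrary): each new node links to an existing node with probability proportional to the links that node already holds, and a handful of supernodes end up with a disproportionate share of the ties:

import random
from collections import Counter

random.seed(1)
endpoints = [0, 1]            # one initial link between node 0 and node 1
degree = Counter(endpoints)   # how many link endpoints each node holds

for new_node in range(2, 2000):
    # Preferential attachment: picking uniformly from the list of endpoints
    # selects an existing node with probability proportional to its degree.
    target = random.choice(endpoints)
    endpoints.extend([new_node, target])
    degree[new_node] += 1
    degree[target] += 1

top5 = sum(count for _, count in degree.most_common(5))
share = 100.0 * top5 / sum(degree.values())
print("The 5 best-connected nodes hold about", round(share, 1), "% of all link endpoints")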

    It is reasonable to assume that a less random search would result in more reliable results. For example, as matters currently stand, were I to conduct a search for “social networking” then probability dictates that I would most likely land on Clay Shirky, since Shirky is a super-connector and therefore cited in most places I am likely to find through a random search. But Shirky’s political affiliation and economic outlook may be very different from mine; it would be preferable to find a resource authored by someone who shares my own perspective more closely. Therefore, it is reasonable to suppose that if I were to search for a resource based on both the properties of the resource and the properties of the author, I would be more likely to find a suitable resource than were I to rely on a random author.

    Such a search, however, is impossible unless the properties of the author are available in some form (presumably something like an RDF file) and, just as importantly, unless those properties are connected in an unambiguous way to the resources being sought.

    I have proposed66 that social networking be combined explicitly with the semantic web in what I have called the semantic social network (SSN). Essentially, SSN involves two major components: first, that there be, expressed in XML or RDF, descriptions of persons (authors, readers, critics) publicly available on the web, sometimes with explicit ties to other persons; and second, that references to these descriptions be employed in RDF or XML files describing resources.

    Neither would at first glance seem controversial, but as I mention above, there is little in the way of personal description in the semantic web, and even more surprisingly, the vast majority of XML and RDF specifications identify persons (authors, editors, and the like) with a string rather than with a reference to a resource. Such strings are ambiguous: they do not uniquely identify a person (after all, how many people named John Smith are there?), and they do not identify a location where more information may be found (with the result that many specifications require that additional information be contained in the resource description, resulting in, for example, the embedding of VCard information in LOM files).

    It should be immediately obvious that the explicit conjunction of personal information and resource information within the context of a single distributed search system will facilitate much more fine-grained searches than either system considered separately. For example, were I to look for a resource on ‘social networks’, I might request resources about social networks authored by ‘people who are similar to me’, where similarity is defined as a mapping of commonalities of personal feature sets: language and nationality, say, commonly identified ‘friends’, or even similarities in licensing preferences. Or, were I to (randomly or otherwise) locate an individual with, to me, an interesting point of view, I could “search for all articles written by n and friends of n”.
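
    Here is a minimal sketch of the kind of query this would enable ("all articles written by n and friends of n"). All names, URIs and records are invented; the point is only that the resource's creator is a reference that can be joined against the personal descriptions:

people = {
    "http://example.org/people/jane": {"name": "Jane", "knows": ["http://example.org/people/ravi"]},
    "http://example.org/people/ravi": {"name": "Ravi", "knows": []},
    "http://example.org/people/lee":  {"name": "Lee",  "knows": []},
}

resources = [
    {"title": "Social networks primer", "creator": "http://example.org/people/jane"},
    {"title": "Notes on RDF",           "creator": "http://example.org/people/ravi"},
    {"title": "Unrelated post",         "creator": "http://example.org/people/lee"},
]

def articles_by_person_and_friends(person_uri):
    # Because the creator is a URI reference rather than an ambiguous string,
    # resource metadata can be joined directly against the personal metadata.
    circle = {person_uri, *people[person_uri]["knows"]}
    return [r["title"] for r in resources if r["creator"] in circle]

print(articles_by_person_and_friends("http://example.org/people/jane"))
# -> ['Social networks primer', 'Notes on RDF']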

    Identity plays a key role in projected future developments of the semantic web. In his famous architecture diagram, Tim Berners-Lee identifies a digital signature as being the backbone of RDF, ontology, logic and proof.67 A digital signature establishes what he calls the provenance of a statement: we are able to determine not only that “A is a B”, but also according to whom A is a B. “Digital signatures can be used to establish the provenance not only of data but also of ontologies and of deductions.”68 But as useful as a digital signature may be for authentication, it is an unfaceted identification. To know something about the person making the assertion, it will be necessary to attach a personal identity to an XML or RDF description.

    As we examine the role that personal identity plays in semantic description, it becomes apparent that much more fine-grained descriptions of resources themselves become possible. For there are three major ways in which a person may be related to a resource: as the author or creator of the resource; as the user or consumer of the resource; and as a commentator or evaluator of the resource. Each of these three types of person may create metadata about a resource. An author may give a resource a title, for example. A user may give the resource a “hit” or a reference (or a “link”). And a commentator may provide an assessment, such as “good” or “board certified”. Metadata created by these three types of persons may be called “first party metadata”, “second party metadata” and “third party metadata”, respectively.
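
    A small invented sketch of the three kinds of metadata about a single resource, grouped by who created them:

resource = "http://example.org/resources/42"

first_party = {    # created by the author or creator
    "creator": "http://example.org/people/jane",
    "title": "Social networks primer",
}
second_party = [   # created by users or consumers: hits, links, references
    {"user": "http://example.org/people/ravi", "action": "link"},
    {"user": "http://example.org/people/lee",  "action": "hit"},
]
third_party = [    # created by commentators or evaluators
    {"reviewer": "http://example.org/people/kim", "assessment": "good"},
]

print(resource, "-", first_party["title"], "-",
      len(second_party), "uses,", len(third_party), "evaluation")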

    The semantic web and social networking have each developed separately. But the discussion in this short paper should be sufficient to have shown that they need each other. In order for social networks to be relevant, they need to be about something. And in order for the semantic web to be relevant, it needs to be what somebody is talking about. Authors need content, and content needs authors.

    Footnotes

    43 Stephen Downes. Semantic Networks and Social Networks. The Learning Organization Journal Volume 12, Number 5 411-417 May 1, 2005. http://www.ingentaconnect.com/content/mcb/119/2005/00000012/00000005/art00002

    44 M. Buchanan. Nexus: Small Worlds and the Groundbreaking Science of Networks. 2002. Perseus Publishing, Cambridge, MA. p. 49.

    45 Ibid. p. 17.

    46 J.M. Cook. Social Networks: A Primer. Ebook. 2001. Available at: http://www.soc.duke.edu/jcook/networks.html

    47 Garton, L., Haythornthwaite, C. and Wellman, B. “Studying online social networks”, JCMC, Vol. 3 No. 1. 1997. http://www.ascusc.org/jcmc/vol3/issue1/garton.html

    48 Stanley Milgram. “The Small World Problem”, Psychology Today, pp. 60-7, May 1967. http://smallworld.sociology.columbia.edu/ (link not currently functioning).

    49 Buchanan, M. Nexus: Small Worlds and the Groundbreaking Science of Networks, Perseus Publishing, Cambridge, MA. 2002.

    50 J. Aquino. The Blog is the social network. Weblog Post. 2005. http://jonaquino.blogspot.com/2005/04/blog-is-social-network.html

    51 World Wide Web Consortium (W3C). Semantic Web. Paper presented at the World Wide Web Consortium (W3C). 2001. http://www.w3.org/2001/sw/

    52 Tim Berners-Lee, Hendler, J. and Lassila, O. The Semantic Web. Scientific American, May, 2001. http://www.scientificamerican.com/article.cfm?articleID=00048144-10D2-1C70-84A9809EC588EF21&catID=2

    53 Stephen Downes. Canadian Metadata Forum – Summary. Stephen’s Web (weblog). September 20, 2003. http://www.downes.ca/post/52

    54 Stephen Downes. Content Syndication and Online Learning. Stephen’s Web (weblog). September 22, 2000. http://www.downes.ca/post/148

    55 G. Beged-Dov, et al. RDF Site Summary 1.0. 2001. http://web.resource.org/rss/1.0/spec

    56 G. Beged-Dov, et al. RDF Site Summary 1.0 Modules. 2001. http://web.resource.org/rss/1.0/modules/

    57 HR-XML Consortium. “Downloads”. 2005. p. 139.

    58 OASIS. Customer Information Quality TC. 2005. http://www.oasis-open.org/committees/ciq/charter.php

    59 IMS Global Learning Consortium. IMS Learner Information Package Specification. 2005 http://www.imsglobal.org/profiles/

    60 Edd Dumbill. Finding Friends with XML and RDF. XML Watch (weblog). 2002. Accessed June 1, 2002. http://www-106.ibm.com/developerworks/xml/library/x-foaf.html

    61 A.L. Cervini. Network connections: an analysis of social software that turns online introductions into offline interactions. Master’s thesis, Interactive Telecommunications Program, NYU. 2003. http://stage.itp.tsoa.nyu.edu/alc287/thesis/thesis.html

    62 Light-weight Identity Description. Website. 2005. No longer extant. Original URL: http://lid.netmesh.org/

    63 Cervini, 2003.

    64 Albert-Laszlo Barabasi (2002), Linked: The New Science of Networks, Perseus Press, Cambridge, MA, p. 70.

    65 Clay Shirky. Power Laws, Weblogs, and Inequality. Weblog post. February 8, 2003. http://www.shirky.com/writings/powerlaw_weblog.html

    66 Stephen Downes. The Semantic Social Network. Stephen’s Web (weblog). February 14, 2004. http://www.downes.ca/post/46

    67 Tim Berners-Lee. Semantic Web XML2000. 2000. http://www.w3.org/2000/Talks/1206-xml2k-tbl/slide10-0.html

    68 Edd Dumbill. Berners-Lee and the Semantic Web Vision. XML.com (ezine). December, 2000. http://www.xml.com/pub/a/2000/12/xml2000/timbl.html

    Further reading

    Gladwell, M. (2000), The Tipping Point: How Little Things Can Make a Big Difference, Little, Brown & Company, Boston, MA, pp. 45-6.

    Palmer, S.B. (2001), The Semantic Web: An Introduction, available at: http://infomesh.net/2001/swintro/

    Vitiello, E. (2002), FOAF, available at: www.perceive.net/xml/foaf.rdf

     

    Moncton, October 10, 2005

     

     

    The Space Between the Notes

    On reading Kathy Sierra…69

    We turn clay to make a vessel; but it is on the space where there is nothing that the usefulness of the vessel depends. – Tao Te Ching

    An old insight, often forgotten.

    Listening to the recent talks from TED, I noticed that all the speakers were roaring along at top speed, delivering a hundred words a minute. In my own talks, I speak more slowly (something I learned to do to facilitate simultaneous translation). Why would a professional speaker move so quickly, I wondered, when greater comprehension comes from a more measured delivery?

    Then I understood. A person who speaks quickly appears to be intelligent, appears to be worth listening to, appears, therefore, to be worth paying to speak. Every speech given by one of these speakers is an advertisement for the next.

    It’s the same with things, with objects. Greater accumulation conveys the greater appearance of worth. But the sheer mass of objects demonstrates that the only purpose of the one object is the obtaining of another.

    In this way, the filling of space results in emptiness. When the purpose of obtaining the one is only for obtaining the next, then you can never have anything.

    Footnotes

    69 Kathy Sierra. Hooverin’ and the space between notes. Creating Passionate Users (weblog). July 18, 2006. http://headrush.typepad.com/creating_passionate_users/2006/07/hooverin_and_th.html

    Moncton, July 19, 2006

     

     

    The Vagueness of George Siemens

    Posted to Half an Hour.

    I like George Siemens and he says a lot of good things, but he is often quite vague, an imprecision that can be frustrating. In his discussion70 of my work on connective knowledge, for example, he observes, “In this model, concepts are distributed entities, not centrally held or understood…and highly dependent on context. Simply, elements change when in connection with other elements.” What does he mean by ‘elements’? Concepts? Nodes in the network? Entities? You can’t just throw a word in there; you need some continuity of reference.

    Why is this important? Siemens dislikes the relativism that follows from the model. Fair enough; people disagreed with Kant about the noumenon71 too. But he writes, “I see a conflict with the fluid notions of subjectivity and that items are what they are only in line with our perceptions…and what items are when they connect based on defined characteristics (call them basic facts, if you will)”. And I ask, what does he mean by ‘in line’ or ‘defined characteristics… basic facts’ – if they are defined, how can they be basic facts?

    Then he says, “I still see a role for many types of knowledge to hold value based on our recognition of what is there.” Now I’m tearing my hair. “Hold value?” What can he mean… does he know? Does he mean “‘Snow is white’ is ‘true’ if and only if ‘snow is white’?” Or is he simply kicking a chair and saying “Thus I refute Berkeley.” In which case I can simply recommend On Certainty72 (one of my favourite books in the world) and move along.

    He continues, “The networked view of knowledge may be more of an augmentation of previous categorizations, rather than a complete displacement.” Now I’m quite sure that’s not what he means. He is trying to say something like ‘knowledge obtained through network semantics does not replace knowledge obtained by more traditional means, but merely augments it.’ Fine – if he can give us a coherent account of the knowledge obtained through traditional means. But it is on exactly this point that the traditional theory of knowledge falters. We are left without certainty. You can’t “augment” something that doesn’t exist.

    Here is his main criticism: “At this point, I think Stephen confuses the original meaning inherent in a knowledge element, and the changed meaning that occurs when we combine different knowledge elements in a network structure.” Well I am certainly confused, but not, I think, as a result of philosophical error. What can Siemens possibly mean by ‘knowledge element’? It’s a catch-all term that refers to whatever you want it to – a proposition, a concept, a system of categorization, an entity in a network. But these are very different things – statements about a ‘knowledge element’ appear true only because nobody knows what a ‘knowledge element’ is.

    He writes, “Knowledge, in many instances, has clear, defined properties and its meaning is not exclusively derived from networks…” What? Huh? If he is referring to, say, propositions, or concepts, or categorizations, this is exactly not true – but the use of the fuzzy ‘knowledge elements’ serves to preclude any efforts to pin him down on this. And have I ever said “meaning is derived from networks”? No – I would never use a fuzzy statement like ‘derived from’ (which seems to suggest, but not entail, some notion of entailment).

    He continues, “The meaning of knowledge can be partly a function of the way a network is formed…” Surely he means “the meaning of an item of knowledge,” which in turn must mean… again, what? A proposition, etc.? Then is he saying, “The meaning of a proposition can be partly a function of the way a network is formed…” Well, no, because it’s a short straight route to relativism from there (if the meaning of a proposition changes according to context, and if the truth of a proposition is a function of its meaning, then the truth of a proposition changes according to the way the network was formed).

    What is Siemens’s theory of meaning? I’m sorry, but I haven’t a clue. He writes, “The fact that the meaning of an entity changes based on how it’s networked does not eliminate its original meaning. The aggregated meaning reflects the meaning held in individual knowledge entities.” An entity – a node in a network? No.

    He has to be saying something like this: for any given description of an event, Q, there is a ‘fact of the matter’, P, such that, however the meaning of Q changes as a consequence of its interaction with other descriptions D, it remains the case that Q is at least partially a function of P, and never exclusively of D. But if this is what he is saying, there is any number of ways it can be shown to be false, from the incidence of mirages and visions to neural failures to counterfactual statements to simple wishful thinking.

    But of course Siemens doesn’t have to deal with any of this because his position is never articulated any more clearly than ‘Downes says there is no fact of the matter, there is a fact of the matter, thus Downes is wrong’. To which I reply, simply, show me the fact of the matter.

    Show me one proposition, one concept, one categorization, one anything, the truth (and meaning) of which is inherent in the item itself and not as a function of the network in which it is embedded.

    Siemens says, introducing my work, that I explore “many of the concepts I presented in Knowing Knowledge…and that others (notably Dave Snowden and Dave Weinberger) have long advocated – namely that the structured view of knowledge has given way to more diverse ways of organizing, categorizing, and knowing.”

    I don’t think this is true. Siemens, Snowden and Weinberger may all be talking about “more diverse ways of knowing” – but I am not talking about their ‘diverse ways of knowing’; rather, I have been talking – consistently, and for decades – about how networks learn things, know things, and do things.

    Footnotes

    70 George Siemens. Knowing Knowledge Discussion Forum. Website. http://www.knowingknowledge.com/ Specific citation no longer extant. Original link: http://knowingknowledge.com/2007/04/toward_a_future_knowledge_soci.php

    71 Wikipedia. Noumenon. Accessed April 19, 2007. http://en.wikipedia.org/wiki/Noumenon

    72 Ludwig Wittgenstein. On Certainty. Blackwell. January 16, 1991. http://books.google.ca/books/about/On_Certainty.html?id=ZGHG6WkVF5EC&redir_esc=y

    Moncton, April 19, 2007

     

    Network Diagrams

    Created for Connectivism and Connective Knowledge #cck11

    Here is a selection of network diagrams:

    Web of Data. From Linked Data Meetup 73

    Last.fm Related Musical Acts. From Sixdegrees.hu74

    Map of Science. From Plos One,75 Clickstream Data Yields High-Resolution Maps of Science.

     

    Comment

    Bonni Stachowiak has left a new comment on your post “Network Diagrams”: LinkedIn just came out with an incredible way of visualizing your professional network connections, called InMaps.76

    Downes said… The InMaps are here: http://inmaps.linkedinlabs.com/

    Footnotes

    73 Georgi Kobilarov. Meetup Group Photo Album. Web Of Data Meetup. January 21, 2010. http://www.meetup.com/Web-Of-Data/photos/807995/#12724766

    74 Sixdegrees.hu. Reconstructing the structure of the world-wide music scene with Last.fm. Undated, accessed January 24, 2011. http://sixdegrees.hu/last.fm/

    75 Bollen J, Van de Sompel H, Hagberg A, Bettencourt L, Chute R, et al. (2009) Clickstream Data Yields High-Resolution Maps of Science. PLoS ONE 4(3): e4803. http://www.plosone.org/article/info:doi/10.1371/journal.pone.0004803

    76 Bonni Stachowiak. Visualize your network connections #CCK11. Teaching in Higher Education (weblog), January 24, 2011. http://teachinginhighered.com/visualize-your-network-connections-cck11-0

    Moncton, November 14, 2010

     

    What Networks Have In Common

    David T. Jones asks,77 “Does connectivism conflate or equate the knowledge/connections with these two levels (“neuronal” and “networked”)? Regardless of whether the answer is yes or no, what are the implications that arise from that response?”

    The answer to the first question is ‘yes’, but with some caveats.

    The first caveat is expressed in several of my papers. It is that historically we can describe three major types of knowledge:

    • qualitative – i.e., knowledge of properties, relations, and other typically sensible features of entities
    • quantitative – i.e., knowledge of number, area, mass, and other features derived by means of discernment or division of entities within sensory perception
    • connective – i.e., knowledge of patterns, systems, ecologies, and other features that arise from the recognition of interactions of these entities with each other.

    (There is an increasing effect of context-sensitivity across these three types of knowledge. Sensory information is in the first instance context-independent, as (if you will) raw sense data, but as we begin to discern and name properties, context-sensitivity increases. As we begin to discern entities in order to count them, context-sensitivity increases further. Connective knowledge is the most context-sensitive of all, as it arises only after the perceiver has learned to detect patterns in the input data.)

    The second caveat is that there is not one single domain, ‘knowledge’, and, correspondingly, not one single entity, the (typically undesignated) knower. Any entity or set of entities that can (a) receive raw sensory input, and (b) discern properties, quantities and connections within that input, can be a knower, and consequently, know.

    (Note that I do not say ‘possess knowledge’. To ‘know’ is to be in the state of perceiving, discerning and recognizing. It is the state itself that is knowledge; while there are numerous theories of ‘knowledge of’ or ‘knowledge that’, etc., these are meta-theories, intended to assess or verify the meaning, veracity, relevance, or some other relational property of knowledge with respect to some domain external to that knowledge.)

    Given these caveats, I can identify two major types of knowledge, specifically, two major entities that instantiate the states I have described above as ‘knowledge’. (There are many more than two, but these two are particularly relevant for the present discussion).

    1. The individual person, which senses, discerns and recognizes using the human brain.
    2. The human society, which senses, discerns and recognizes using its constituent humans.

    These are two separate (though obviously related) systems, and correspondingly, we have two distinct types of knowledge, what might be called ‘personal knowledge’ and ‘public knowledge’ (I sometimes also use the term ‘social knowledge’ to mean the same thing as ‘public knowledge’).

    Now, to return to the original question, “Does connectivism conflate or equate the knowledge/connections with these two levels (‘neuronal’ and ‘networked’)?”, I take it to mean, “Does connectivism conflate or equate personal knowledge and public knowledge?”

    Are they the same thing? No.

    Are they each instances of an underlying mechanism or process that can be called (for lack of a better term) ‘networked knowledge’? Yes.

    Is ‘networked knowledge’ the same as ‘public knowledge’? No. Nor is it the same as ‘personal knowledge’. By ‘networked knowledge’ I mean the properties and processes that underlie both personal knowledge and public knowledge.

    Now to be specific: the state we call ‘knowledge’ is produced in (complex) entities as a consequence of the connections between and interactions among the parts of that entity.

    This definition is significant because it makes it clear that:

    • ‘knowledge’ is not equivalent to, or derived from, the properties of those parts.
    • ‘knowledge’ is not equivalent to, or derived from, the numerical properties of those parts

    Knowledge is not compositional, in other words. This becomes most clear when we talk about personal knowledge. In a human, the parts are neurons, and the states or properties of those neurons are electro-chemical potentials, and the interactions between those neurons are electro-chemical signals. Yet a description of what a person ‘knows’ is not a tallying of descriptions of electro-chemical potentials and signals.

    Similarly, what makes a table ‘a table’ is not derivable merely by listing the atoms that compose the table, and there is no property, ‘tableness’, inherent in each of those atoms. What makes a table a ‘table’ is the organization and interactions (which produce ‘solidity’) between those atoms. But additionally, ascription of this property, being a ‘table’, is context-dependent; it depends on the viewer being able to recognize that such-and-such an organization constitutes a table.

    A lot follows from this, but I would like to focus here on what personal knowledge and public knowledge have in common. And, given that these two types of knowledge result from the connections between the parts of these entities, the question now arises, what are the mechanisms by which these connections form or arise?

    There are two ways to answer this:

      • the connections arise as a result of the actual physical properties of the parts, and are unique to each type of entity. Hence (for example) the connections between carbon atoms that arise to produce various organizations of carbon, such as ‘graphite’ or ‘diamond’, are unique to carbon, and do not arise elsewhere
      • the connections arise as a result of (or in a way that can be described as (depending on whether you’re a realist about connections)) a set of connection-forming mechanisms that are common to all types of knowledge.

    Natural science is the domain of the former. Connective science (what we now call fields such as ‘economics’, ‘education’, ‘sociology’) is the domain of the latter.

    One proposition of connectivism (call it ‘strong connectivism’) is that what we call ‘knowledge’ consists of connections that are created solely as a result of the common connection-forming mechanisms, and not as a result of the particular physical constitution of the system involved. Weak connectivism, by contrast, will allow that the physical properties of the entities create connections, and hence knowledge, unique to those entities. Most people (including me) would, I suspect, support both strong and weak connectivism.

    The question “Does connectivism conflate or equate the knowledge/connections with these two levels” thus now resolves to the question of whether strong connectivism is (a) possible, and (b) part of the theory known as connectivism. I am unequivocal in answering ‘yes’ to both parts of the question, with the following caveat: the connection-forming mechanisms are, and are describable as, physical processes. I am not postulating some extra-worldly notion of ‘the connection’ in order to explain this commonality.

    These connection-forming mechanisms are well known and well understood and are sometimes rolled up under the heading of ‘learning mechanisms’. I have at various points in my writing described four major types of learning mechanisms:

      • Hebbian associationism – what fires together, wires together
      • Contiguity – proximate entities will link together and form competitive pools
      • Back Propagation – feedback; sending signals back through a network
      • Settling – e.g., conservation of energy or natural equilibrium.

    There may be more. For example, Hebbian associationism may consist not only of ‘birds of a feather link together’ but also associationism of compatible types, as in ‘opposites attract’.
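
    As an illustration of the first of these mechanisms, here is a minimal sketch of a Hebbian-style weight update; the learning rate and the firing pattern are invented, and this is offered only as a toy, not as a model drawn from the text:

def hebbian_update(weight, activation_a, activation_b, rate=0.1):
    # The connection strengthens in proportion to the co-activation of its two
    # ends: units that fire together come to be more strongly wired together.
    return weight + rate * activation_a * activation_b

w = 0.0
firing_pattern = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]   # co-activations over time
for a, b in firing_pattern:
    w = hebbian_update(w, a, b)

print("connection weight after the firing pattern:", round(w, 2))   # 0.3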

    What underlying mechanisms exist, what are the physical processes that realize these mechanisms, and what laws or principles describe these mechanisms, is an empirical question. And thus, it is also an empirical question as to whether there is a common underlying set of connection-forming mechanisms.

    But from what I can discern to date, the answer to this question is ‘yes’, which is why I am a strong connectivist. But note that it does place the onus on me to actually describe the physical processes that are instances of one of these four mechanisms (or at least, since I am limited to a single lifetime, to describe the conditions for the possibility of such a description).

    There is a separate and associated version of the question, “Does connectivism conflate or equate the knowledge/connections with these two levels,” and that is whether the principles of the assessment of knowledge are the same at both levels (and all levels generally).

    There are various ways to formulate that question. For example, “Is the reliability of knowledge-forming processes derived from the physical constitution of the entity, or is it an instance of an underlying general principle of reliability?” And, just as above, we can discern a weak theory, which would ground reliability in the physical constitution, and a strong theory, which grounds it in underlying mechanisms (I am aware of the various forms of ‘reliabilism’ proposed by Goldman, Swain and Plantinga, and am not referring to their theories with this incidental use of the word ‘reliable’).

    As before, I am a proponent of both, which means there are some forms of underlying principles that I think inform the assessment of connection-forming mechanisms within collections of interacting entities. Some structures are more (for lack of a better word) ‘reliable’ than others.

    I class these generally as types of methodological principles (the exact designation is unimportant; Wittgenstein might call them ‘rules’ in a ‘game’). By analogy, I appeal to the mechanisms we use to evaluate theories: simplicity, parsimony, testability, etc. These mechanisms do not guarantee the truth of theories (whatever that means) but have come to be accepted as generally (for lack of a better word) reliable means to select theories.

    In the case of networks, the mechanisms are grounded in a distinction I made above, that knowledge is not compositional. Mechanisms that can be seen as methods to define knowledge as compositional are detrimental to knowledge formation, while mechanisms that define knowledge as connective are helpful to knowledge formation.

    I have attempted to characterize this distinction more generally under the heading of ‘groups’ and ‘networks’. In this line of argument, groups are defined compositionally – sameness of purpose, sameness of type of entity, etc., while networks are defined in terms of the interactions. This distinction between groups and networks has led me to identify four major methodological principles:

        • autonomy – each entity in a network governs itself
        • diversity – entities in a network can have distinct, unique states
        • openness – membership in the network is fluid; the network receives external input
        • interactivity – ‘knowledge’ in the network is derived through a process of interactivity, rather than through a process of propagating the properties of one entity to other entities

    Again, as with the four learning mechanisms, it is an empirical question as to *whether* these processes create reliable knowledge-forming networks (I believe they do, based on my own observations, but a more rigorous proof is desirable), and I am by this theory committed to a description of the *mechanisms* by which these principles engender the reliability of networks.

    In the case of the latter, the mechanism I describe is the prevention of ‘network death’. Network death occurs when all entities are of the same state, and hence all interaction between them has either stopped or entered into a static or steady state. Network death is the typical result of what are called ‘cascade phenomena’, whereby a process of spreading activation eliminates diversity in the network. The four principles are mechanisms that govern, or regulate, spreading activation.
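
    A toy simulation of the cascade just described (all parameters invented): when every node simply adopts a neighbour's state, with no autonomy, diversity or external input, the remaining variety is eventually eliminated and the network 'dies':

import random

random.seed(2)
n = 20
states = [random.choice([0, 1]) for _ in range(n)]   # an initially diverse network

for step in range(1, 10001):
    i = random.randrange(n)
    states[i] = states[(i + 1) % n]    # pure propagation: copy a neighbour on a ring
    if len(set(states)) == 1:
        print("network death at step", step, "- every node is in state", states[0])
        break
else:
    print("still some diversity after 10000 steps:", sorted(set(states)))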

    So, the short answer to the first question is “yes”, but with the requirement that there be a clear description of exactly what it is that underlies public and personal knowledge, and with the requirement that it be clearly described and empirically observed.

    I will leave the answer to the second question as an exercise for another day.

    Downes said…

    • What are strong and weak connectivism?

    Let me give you an example.

    Salt is created by the forming of a link between an atom of sodium and an atom of chlorine. While bonds of this sort are common, they require that the two elements be of a specific type. If the elements are different, the resulting compound will not be salt, but something quite different.

    This is weak connectivism. The nature of the connection, and indeed, whether the connection will form at all, depends on the nature of the entities.

    Here’s another example. Birds (say, sparrows) will only mate with other birds. They will not mate with lizards. So no mating-connection will form between a bird and a lizard. So, an account of a network based on the mating habits of birds is a form of weak connectivism. The structure and shape of the network depends on the nature of its constituent parts.

    By contrast, if you talk about network formation without reference to the nature of the things connecting, that’s strong connectivism. If you simply think, for example, of the way any two atoms interact, or the way any two animals interact, then you’re talking about the nature of connections abstractly. That’s strong connectivism.

    No account of connectivism is purely strong connectivism or purely weak connectivism. All descriptions are a combination of both. Some descriptions rely more on the nature of the entities being connected, and so we call those examples of weak connectivism. Others emphasize more the nature of connections generally, and we can call that strong connectivism.

    Footnotes

    77 David T. Jones. A question (or two) on the similarity of “neuronal” and “networked” knowledge. The Weblog of (a) David Jones, March 5, 2011. http://davidtjones.wordpress.com/2011/03/05/a-question-or-two-on-the-similarity-of-neuronal-and-networked-knowledge/

    Moncton, February 27, 2011

     

    The Personal Network Effect

    The presumption in the design of most networks is that the value of the network increases with the number of nodes in the network. This is known as the Network Effect, a term that was coined by Robert Metcalfe,78 the founder of Ethernet.

    [Image: network effect diagram.79]

    It is therefore tempting to suggest that a similar sort of thing holds for members of the network, that the value of the network is increased the more connections a person has to the network. This isn’t the case.

    Each connection produces value to the person. But the relative utility of the connection – that is, its value compared to the value that has already been received elsewhere – decreases after a certain point has been reached.

    The reason for this is that value is derived from semantic relevance. Information is semantically relevant only if it is meaningful to the person receiving it (indeed, arguably, it must be semantically relevant to be considered information at all; if it is not meaningful, then it is just static or noise).

    Semantic relevance is the result of a combination of factors (which may vary with time and with the individual), according to whether the information is:

    • new to the receiver (cf. Fred Dretske, Knowledge and the Flow of Information)

    • salient to the receiver (there are different types of salience: perceptual salience, rule salience, semiotic salience, etc.)

    • timely, that is, the information arrives at an appropriate time (before the event it advertises, for example) – this does not mean ‘soonest’ or ‘right away’

    • utile, that is, whether it can be used, whether it is actionable

    • cognate, that is, whether it can be understood by the receiver

    • true, that is, the information is consistent with the belief set of the receiver

    • trusted, that is, comes from a reliable source

    • contiguous, that is, whether the information is flowing fast enough, or as a sufficiently coherent body

    Because of these conditions, the value of each new piece of information, on average, will decrease relative to its predecessors. At a certain point, the value of the new information will be such that it actually detracts from the value of the information already received (by, say, blocking it, distracting one’s attention from it, contradicting it, and the like).

    For example, suppose someone tells you that the house is on fire. This is very relevant information, and quite useful to you. Then another person tells you the house is on fire. It’s useful to have confirmation, but clearly not as useful as the first notice. Then a third and a fourth and a fifth, and you want to tell people to shut up so you can hear the next important bit of information, namely, how to get out.

    This is the personal network effect. In essence, it is the assertion that, for any person at any given time, a certain finite number of connections to other members of the network produces maximal value. Fewer connections, and important sources of information may be missing. More connections, and the additional information received begins to detract from the value of the network.
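
    A toy model of this effect (the formula and numbers are invented purely to show the shape of the curve): each additional connection contributes diminishing new relevance while imposing a roughly constant cost in noise and distraction, so total value rises to a peak and then declines:

NOISE_COST = 0.12    # assumed per-connection cost of noise, distraction, redundancy

def marginal_value(k):
    # The k-th connection contributes diminishing relevance, minus the fixed cost.
    return 1.0 / k - NOISE_COST

def network_value(n_connections):
    return sum(marginal_value(k) for k in range(1, n_connections + 1))

for n in (1, 3, 5, 8, 12, 20, 40):
    print(n, "connections ->", round(network_value(n), 2))
# In this toy model the value peaks at 8 connections and declines thereafter.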

    Most people can experience the personal network effect for themselves by participating in social networks. One’s Facebook account, for example, is minimally valuable when only a few friends are connected. As the number grows to 100 or so, Facebook becomes about as effective as it can be. If you keep on adding friends beyond that, however, it begins to become less effective.

    This is true not only for Facebook but for networks in general. For any given network, for any given individual in the network, there will be a certain number of connections that produces maximum value for that member in that network.

    This has several implications.

    First, it means that when designing network applications, it is important to build in constraints that allow people to limit the number of connections they have. This is why opt-in networks such as Facebook produce more value per message than open networks such as email.

    Imagine what Twitter would be like if anyone could send you a message! The value in Twitter lies in the user being able to restrict incoming messages to a certain set of friends.

    Second, it provides the basis for a metric describing what will constitute valuable communications in a network. Specifically, we want our communications to be new, salient, utile, timely, cognate, true and contiguous.

    Third, it demonstrates that there is no single set of best connections. A connection that is very relevant to one person might not be relevant to me at all. This may be because we have different interests, different world views, or speak different languages. But even if we have exactly the same needs and interests, we may get the same information from different sources. By the time your source gets to me, the ‘new’ information it gave you might be very ‘old’ to me.

    We see this phenomenon in web communities. Dave Warlick80 today posted a link to a video81 produced by Michael Wesch’s Cultural Anthropology students at Kansas State University.

    Warlick obviously does not read OLDaily, because I linked to the site two weeks ago.82 Warlick credits John Moranski, a school librarian from Auburn High School and Middle School in Auburn, Illinois (no link, which means Moranski probably told him about it in person or by email).

    Warlick’s link, therefore, is of little value to me; it’s old news. However, to many of his readers (specifically, those who don’t read me), this will be new. And hence he is a valuable part of their network.

    Now here is the important part: the people who read Warlick don’t need to read me (at least with respect to this link). They are getting the same information either way. There is no particular reason to select one source over another. Warlick may be part of his readers’ networks by accident (he is the first ed tech person they read, for example) or he may be more semantically relevant to them for other reasons: he is a folksy storyteller, he writes in a simple vocabulary, they have met him personally and trust him, whatever.

    One final point: if we change the way we design the network, we can change the point of maximal value.

    It is toward this effect that much of my previous writing about networks has been directed. How can we structure the network in such a way as to maximize the maximal value? I have suggested four criteria: diversity, autonomy, openness, and connectedness (or interactivity).

    For example, networks that are more diverse – in which each individual has a different set of connections, for example – produce a greater maximal value than networks that are not.

    Consider a community where people only read each other. You can read ten people, say, of a fifty-person community, and hear pretty quickly what every person is thinking. But reading an eleventh will produce almost no value at all; you will just be getting the same information you were already getting. Compare this to the value of a connection from outside the community. Now you are reading things nobody else has thought about; you learn new things, and your comments have more value to the community as a whole.

    It is valuable to have a certain amount of clustering in a network. This is a consequence of the criterion for semantic relevance. It is what people like Clark are getting at when they talk about the need for a common ground,83 or what Wenger means by a shared domain of interest.84 However, an excessive focus on clustering, on what I have characterized as group criteria, results in a decrease in the semantic relevance of messages from community members.

    Footnotes

    78 Wikipedia. Robert Metcalfe. Accessed November 4, 2007. http://en.wikipedia.org/wiki/Robert_Metcalfe

    79 Dion Hinchcliffe. Web 2.0’s Secret Sauce: Network Effects. Social Computing Magazine. July 15, 2006. http://web2.socialcomputingmagazine.com/web_20s_real_secret_sauce_network_effects.htm

    80 Dave Warlick. Another Amazing Video About Teaching and Learning. 2¢ Worth (weblog). November 4, 2007. http://davidwarlick.com/2cents/?p=1240

    81 Michael Wesch. A Vision of Students Today. YouTube (video). October 12, 2007. http://www.youtube.com/watch?v=dGCJ46vyR9o

    82 Stephen Downes. A Vision of Students Today. OLDaily (weblog). October 15, 2007. http://www.downes.ca/post/42024

    83 John Black. Creating a Common Ground for URI Meaning Using Socially Constructed Web sites. WWW 2006, May 23-26, 2006, Edinburgh, Scotland. http://www.ibiblio.org/hhalpin/irw2006/jblack.html

    84 Etienne Wenger. Communities of Practice: A Brief Introduction. June, 2006. http://www.ewenger.com/theory/

    Moncton, November 4, 2007

    Diagrams and Networks

     

    Responding to Paul Ellerman:85

    I was actually pretty careful with the diagrams,86 though on reflection I considered that I should have, for the network diagram, used the standard connectionist (neural network) diagram. See, e.g., this.87

     

    Now in fact, even in networks, there are people like myself, Will Richardson and Dave Warlick, and they are sometimes called “leaders”. But from the perspective of a network, what makes an entity emphasized in this way is the number and nature of the connections it has, and not any directive import. People like Will, Dave and I stand out because we are well-connected, and not (necessarily) because we are well informed, and certainly not (necessarily) because other people do what we say they should do.

    There are in fact two major ways that such people can emerge in a network:

    First, as a consequence of the power law phenomenon. This is discussed at length in the discussion of scale-free networks. It is essentially the first-mover advantage: the person who was in the network first is more likely to attract more links. This is also impacted by advertising and self-promotion, phenomena I would not dissociate from the list of names you provide.

    Second, as a consequence of the bridging phenomenon. Most networks occur in clusters (prototypes of Wenger’s communities of practice) of like-minded individuals. Philosophers of science, say, or naturalistic poets, or the F1 anti Michael Schumacher hate club. Some people, though, have their feet in two such clusters. They like both beat poetry and the Karate Kid. And so they act as a conduit of information between those two groups, and hence, obtain greater recognition.

    In neither case is the person in question a ‘leader’ in anything like the traditional sense. The person does not have ‘followers’ of the usual sort (though they may have fans, but they most certainly don’t have ’staff’ – at least, not as a consequence of their network behaviour). They do not ‘lead’ – they do not tell people what to do. At best and at most, they exemplify the behaviour they would like to see, and at best and at most, they act as a locus of information and conversation.
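    Both mechanisms can be illustrated with a small network sketch. This is illustrative only: the graph sizes and clusters are invented, and the standard Barabasi-Albert growth model is used here as a stand-in for the power law phenomenon, not as a measurement of any real community.

```python
import networkx as nx

# Mechanism 1: preferential attachment. In a Barabasi-Albert graph the
# early-arriving nodes (the low ids) tend to accumulate the most links.
G = nx.barabasi_albert_graph(n=200, m=2, seed=42)
top = sorted(G.degree, key=lambda pair: pair[1], reverse=True)[:5]
print("best-connected (node, degree):", top)   # mostly early node ids

# Mechanism 2: bridging. Two tight clusters -- beat poets and Karate Kid
# fans -- joined by a single acquaintance between node 9 and node 10.
beat_poets = nx.complete_graph(range(0, 10))
karate_fans = nx.complete_graph(range(10, 20))
H = nx.compose(beat_poets, karate_fans)
H.add_edge(9, 10)   # the two people with a foot in each cluster
bc = nx.betweenness_centrality(H)
print("most 'between' people:", sorted(bc, key=bc.get, reverse=True)[:2])   # [9, 10]
```

    In the first case prominence comes just from arriving early; in the second, nodes 9 and 10 sit on every shortest path between the two clusters, and that is all their prominence amounts to.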

    In the diagram, this difference is represented by depicting the ‘traditional’ leader and group as a ‘tree’, with one person connected to a number of people, while at the same time depicting the network as a ‘cluster’, with many people connected to each other.

    Pushing the model back to the ‘leader’ mode suggested by this item would push the diagram back to a ‘hub and spoke’ model such as this88

     

     

    which I was anxious to avoid. Not because it doesn’t depict a network (it does, technically, a scale-free network) but because it depicts what I would call a ‘group’ dominated by ‘leaders’, where the leaders have a directive function.

     

     

     

    Footnotes

    85 Paul Ellerman. Thinking About Networks Part 2. Thoughts on Training, Teaching and Technology. October 2, 2006. http://www.downes.ca/post/35916 Original link (no longer extant): http://tottandt.wordpress.com/2006/10/02/thinking-about-networks-part-2/#comment-17

    86 Stephen Downes. Groups and Networks. Stephen’s Web (weblog). September 25, 2006. http://www.downes.ca/post/35866

    87 E-Sakura System. Images. Probably not the original source, but that’s where I got it. http://www.e-orthopaedics.com/sakura/ Image URL: http://www.e-orthopaedics.com/sakura/images/neural.gif

    88 Valdis Krebs. Decision-Making in Organizations. Orgnet.com. 2008. http://orgnet.com/decisions.html

    Moncton, October 02, 2006

     

    The Blogosphere is a Mesh

    I said

           You say “Wrong both descriptively – it’s not what the blogosphere actually looks like… What we are more like … is a mesh, and not a hub-and-spokes network.”

    And Mark Berthelemy asked89

              I’d be very interested to know the evidence for that statement.

    My evidence is that this is what I see, and that if you looked at it from the same perspective, you would see it too.

    Yes, you could measure it ’empirically’ via a formal study, but (as I have commented on numerous occasions) you tend to find whatever you’re looking for with such studies.

    For example, you could do a Technorati sort of survey and list all of the blogs that link to each other. From this, you could construct a social network graph. And that graph would show what the link cited in this thread shows, that there is a power-law distribution and therefore a hub-and-spoke structure.

    And thus you would have found what you were looking for.

    And yet, from my perspective – as a hub – I see remarkably little traffic flowing through me. How can this be?

    The edublogosphere – and the wider blogosphere – isn’t constructed out of links. The link is merely one metric – a metric that is both easy to count and particularly susceptible to power-law structuring. Links play a role in discovery, but a much smaller role in communication.
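    To make this concrete, here is a minimal sketch of the kind of link survey being described (the blogs and the links among them are invented for illustration). All such a survey can count is links, which is exactly why it finds a hub:

```python
from collections import Counter

# Invented for illustration: who links to whom in a tiny blogosphere.
links = [
    ("blog_a", "hub_blog"), ("blog_b", "hub_blog"), ("blog_c", "hub_blog"),
    ("blog_d", "hub_blog"), ("blog_e", "hub_blog"),
    ("blog_a", "blog_b"), ("blog_c", "blog_d"),
]

# A Technorati-style survey reduces everything to counts of incoming links...
in_links = Counter(target for _source, target in links)
print(in_links.most_common())   # [('hub_blog', 5), ('blog_b', 1), ('blog_d', 1)]

# ...while comments, email, instant messaging and face-to-face conversation
# never appear in this data at all.
```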

    We can identify one non-link phenomenon immediately, by looking at almost any blog. After any given post, you’ll see a set of comments. Look at this post of Will Richardson’s90. There’s a set of 25 comments following. And the important thing here is that these comments are communications happening in a social space. They are one-to-many communications. This forms a little cluster of people communicating directly with each other.

    Now look at any social network, say del.icio.us.91 This tool was ranked second92 on a list composed mostly of inputs from edubloggers. People link to each other on social networks. Each person keeps his or her own list of ‘buddies’. Here’s mine.93 Empty; I don’t use del.icio.us much. Here’s someone else’s network.94 Edubloggers are using dozens of networks – Friendster, Bebo, Facebook, Myspace, Twitter and more.

    But that’s not all. A lot of the chatter I see going on between people I’m connected to is taking place via email, Skype, instant messaging, and similar person-to-person messaging tools.

    People put people on their ‘buddy lists’ that they want to call and to hear from. They collect email addresses (and white-list them in their spam filters).

    Communications maps are typically clustered.95 Like so:

     

    The result is also observable. You get a clustering of distinct groups of people with particular interests. In the edublogosphere, for example, I can very easily identify the K12 crowd, the corporate e-learning bloggers, the college and university bloggers, the webheads (ESL), and various others.

    This diagram96 is well known: it charts linkages between books read by bloggers:

     

    This chart is semantic; that is, it depicts what the people talked about. This tells you about the flow of ideas, and not just the physical connections. And when we look at the flow of ideas, we see the characteristic cluster formation.

    The network of people97 who talk about engineering is, similarly, a cluster:

     

    Another way to spot the blogging network is to look at conference attendance. You can again find these clusters. I don’t have diagrams of the edubloggers, but this conference attendee network of Joi Ito’s is typical:98

     

    If we focus, not on a single physical indicator, but on the set of interactions taken as a whole, it becomes clear that the blogosphere is in fact a cluster-style network, and not a hub-and-spoke network. Bloggers form communities among themselves and communicate using a variety of tools, of which their blogs constitute only one.

    Footnotes

    89 Mark Berthelemy. Comment to Stephen Downes, Top Edublogs – August 2007. August 20, 2007. http://www.downes.ca/post/41338

    90 Will Richardson. The Future of Teaching. Weblogg-Ed (weblog). August 15, 2007. http://weblogg-ed.com/2007/the-future-of-teaching/

    91 Delicious. Website. http://del.icio.us/

    92 Jane Hart. Top 100 Tools for Learning. Centre for Learning & Performance Technologies. 2007. http://c4lpt.co.uk/top-tools/top-100-tools/ Original link (no longer extant): http://www.c4lpt.co.uk/recommended/top100.html

    93 Downes. Tag Search. Delicious. http://delicious.com/network/Downes

    94 Delicious. Peartree4. Tag Search. Was full of links when originally referenced August 20, 2007. http://delicious.com/network/peartree4

    95 Valdis Krebs, et al. Social Network Interaction Will Become… Network Weaving. May, 2007. http://www.networkweaver.blogspot.ca/ Original link no longer extant: http://www.networkweaving.com/blog/2007/05/social-network-interaction-will-become.html

    96 Valdis Krebs. Divided We Stand? Orgnet.com (weblog). 2003. http://www.orgnet.com/leftright.html

    97 Erik van Bekkum. Visualization of engineering community of practice. Efios (website). October 7, 2006.

    98 Joichi Ito. Social Network Diagram for ITO JOICHI. Joi Ito (weblog). August 7, 2002. http://joi.ito.com/weblog/2002/08/07/social-network.html

    Moncton, August 20, 2007

     

    ‌The Google Ecosystem

    Posted to Google+

     

    This is an illustration of the Google Plus Ecosystem I created to try to explain the flow of information through Google Plus from its (currently undocumented) sources through to its (currently broken) output.

     

    Moncton, July 9, 2011

     

    What Connectivism Is

     

    Posted to the Connectivism Conference forum99

    At its heart, connectivism is the thesis that knowledge is distributed across a network of connections, and therefore that learning consists of the ability to construct and traverse those networks.

    It shares with some other theories a core proposition, that knowledge is not acquired as though it were a thing. Hence people see a relation between connectivism and constructivism or active learning (to name a couple).

    Where connectivism differs from those theories, I would argue, is that connectivism denies that knowledge is propositional. That is to say, these other theories are ‘cognitivist’, in the sense that they depict knowledge and learning as being grounded in language and logic.

    Connectivism is, by contrast, ‘connectionist’. Knowledge is, on this theory, literally the set of connections formed by actions and experience. It may consist in part of linguistic structures, but it is not essentially based in linguistic structures, and the properties and constraints of linguistic structures are not the properties and constraints of connectivism or connectivist knowledge.

    In connectivism, a phrase like ‘constructing meaning’ makes no sense. Connections form naturally, through a process of association, and are not ‘constructed’ through some sort of intentional action. And ‘meaning’ is a property of language and logic, connoting referential and representational properties of physical symbol systems. Such systems are epiphenomena of (some) networks, and not descriptive of or essential to these networks.

    Hence, in connectivism, there is no real concept of transferring knowledge, making knowledge, or building knowledge. Rather, the activities we undertake when we conduct practices in order to learn are more like growing or developing ourselves and our society in certain (connected) ways.

    This implies a pedagogy that (a) seeks to describe ‘successful’ networks (as identified by their properties, which I have characterized as diversity, autonomy, openness, and connectivity) and (b) seeks to describe the practices that lead to such networks, both in the individual and in society (which I have characterized as modeling and demonstration (on the part of a teacher) and practice and reflection (on the part of a learner)).
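    To give one concrete (and deliberately over-simplified) picture of what ‘connections formed through association’ might look like, here is a minimal Hebbian-style sketch, a standard connectionist device offered as an illustration rather than as the theory itself. Connection strengths change as a side effect of units being active together; nothing is ‘constructed’, and no propositions appear anywhere:

```python
# A minimal Hebbian-style sketch (illustrative only). Connections between
# units strengthen simply because the units are repeatedly active together.
n_units = 4
weights = [[0.0] * n_units for _ in range(n_units)]
rate = 0.1

def experience(active_units):
    """Strengthen the connections among units active in the same experience."""
    for i in active_units:
        for j in active_units:
            if i != j:
                weights[i][j] += rate

# Two features of the world that keep showing up together (units 0 and 1)
# grow a strong connection; a one-off pairing (units 2 and 3) stays weak.
for _ in range(20):
    experience({0, 1})
experience({2, 3})

print(round(weights[0][1], 2))   # 2.0 -- grown through repeated association
print(round(weights[2][3], 2))   # 0.1 -- barely there
```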

     

    Response to comments by Tony Forster

    A link to my paper ‘An Introduction to Connective Knowledge’100 will help with some of the comments in this post (long, sorry).

    Tony writes, “Knowledge is not learning or education, and I am not sure that Constructivism applies only to propositional learning nor that all the symbol systems that we think with have linguistic or propositional characteristics.”101

    I think it would be very difficult to draw out any coherent theory of constructivism that is not based on a system with linguistic or propositional characteristics (or, as I would prefer to say, a ‘rule-based representational system’).

    Tony continues, “The Constructivist principle of constructing understandings is an important principle because it has direct implications for classroom practice. For me it goes much further than propositional or linguistic symbol systems.”

    What is it to ‘construct an understanding’ if it does not involve:

    • a representational system, such as language, logic, images, or some other physical symbol set (i.e., a semantics)
    • rules or mechanisms for creating entities in that representational system (i.e., a syntax)?

    Again, I don’t think you get a coherent constructivist theory without one of these. I am always open to be corrected on this, but I would like to see an example.

    Tony continues, “I am disturbed by your statement that ‘in connectivism, there is no real concept of transferring knowledge, making knowledge, or building knowledge’. I believe that if Connectivism is a learning theory and not just a connectedness theory, it should address transferring understand, making understanding and building understanding.”

    This gets to the core of the distinction between constructivism and connectivism (in my view, at least).

    In a representational system, you have a thing, a physical symbol, that stands in a one-to-one relationship with something: a bit of knowledge, an ‘understanding’, something that is learned, etc.

    In representational theories, we talk about the creation (‘making’ or ‘building’) and transferring of these bits of knowledge. This is understood as a process that parallels (or in unsophisticated theories, is) the creation and transferring of symbolic entities.

    Connectivism is not a representational theory. It does not postulate the existence of physical symbols standing in a representational relationship to bits of knowledge or understandings. Indeed, it denies that there are bits of knowledge or understanding, much less that they can be created, represented or transferred.

    This is the core of connectivism (and its cohort in computer science, connectionism). What you are talking about as ‘an understanding’ is (at a best approximation) distributed across a network of connections. To ‘know that P’ is (approximately) to ‘have a certain set of neural connections’.

    To ‘know that P’ is, therefore, to be in a certain physical state – but, moreover, one that is unique to you, and further, one that is indistinguishable from other physical states with which it is co-mingled.

    Tony continues, “Connectivism should still address the hard struggle within of deep thinking, of creating understanding. This is more than the process of making connections.”

    No, it is not more than the process of making connections. That’s why learning is at once so simple it seems it should be easily explained and so complex that it seems to defy explanation (cf. Hume on this). How can learning – something so basic that infants and animals can do it – defy explanation? As soon as you make learning an intentional process (that is, a process that involves the deliberate creation of a representation) you have made these simple cases difficult, if not impossible, to understand.

    That’s why this is misplaced: “For example, we could launch into connected learning in a way which forgets the lessons of constructivism and the need for each learner to construct their own mental models in an individualistic way.”

    The point is:
    • there are no mental models per se (that is, no systematically constructed rule-based representational systems)
    • and what there is (i.e., connectionist networks) is not built (like a model); it is grown (like a plant)

    When something like this is said, even basic concepts such as ‘personalization’ change completely.

    In the ‘model’ approach, personalization typically means more: more options, more choices, more types of tests, etc. You need to customize the environment (the learning) to fit the student.

    In the ‘connections’ approach, personalization typically means less: fewer rules, fewer constraints. You need to grant the learner autonomy within the environment.

    So there’s a certain sense, I think, in which the understandings of previous theories will not translate well into connectivism, for after all, even basic words and concepts acquire new meaning when viewed from the connectivist perspective.

    Response (1) to Bill Kerr

    Bill Kerr writes, “It seems that building and metacognition are talked about in George’s version but dismissed or not talked about in Stephen’s version.”

    Well, it’s kind of like making friends.

    George talks about deciding which people make useful friends, how to make connections with those friends, building a network of those friends.

    I talk about being open to ideas, communicating your thoughts and ideas, respecting differences and letting people live their lives.

     

    Then Bill comes along and says that George is talking about making friends but Stephen just ignores it.

    Bill continues, “Either the new theory is intended to replace older theories… Or, the new theory is intended to complement older theories. By my reading, Stephen is saying the former and George is saying the latter but I’m not sure.”

    We want to be more precise.

    Any theory postulates the existence of some entities and the non-existence of others. The most celebrated example is Newton’s gravitation, which postulated the existence of ‘mass’ and the non-existence of ‘impetus’.

    I am using the language of ‘mass’. George, in order to make his writing more accessible, (sometimes) uses the language of ‘impetus’. (That’s my take, anyways).

    Response (2) to Bill Kerr

    Bill Kerr writes, “Words / language are necessary to sustain long predictive chains of thought, eg. to sustain a chain or combination of pattern recognition. This is true in chess, for example, where the player uses chess notation to assist his or her memory.”

    This is not true in chess.

    I once played a chess player who (surprisingly to me) turned out to be far my superior (it was a long time ago). I asked, “how do you remember all those combinations?”

    He said, “I don’t work in terms of specific positions or specific sequences. Rather, what I do is to always move to a stronger position, a position that can be seen by recognizing the patterns on the board, seen as a whole.”

    See, that’s the difference between a cognitivist theory and a connectionist theory. The cognitivist thinks deeply by reasoning through a long sequence of steps. The non-cognitivist thinks deeply by ‘seeing’ more intricate and more subtle patterns. It is a matter of recognition rather than inference.

    That’s why this criticism, “Words / language are necessary to sustain long predictive chains of thought,” begs the question. It is levelled against an alternative that is, by definition, non-linear, and hence, does not produce chains of thought.

    Response (3) to Bill Kerr

    Bill Kerr writes, “I don’t see how what you are saying is helpful at the practical level, the ultimate test for all theories.”

    This is kind of like saying that the theory of gravity would not be true were there no engineers to use it to build bridges.

    This is absurd, of course. I am trying to describe how people learn. If this is not ‘practical’, well, that’s not my fault. I didn’t make humans.

    In fact, I think there are practical consequences, which I have attempted to detail at length elsewhere,102 and it would be most unfair to indict my own theoretical stance without taking that work into consideration.

    I have described, for example, the principles that characterize successful networks in my recent paper103 presented to ITForum (I really like Robin Good’s presentation104 of the paper – much nicer layout and graphics). These follow from the theory I describe and inform many of the considerations people like George Siemens have rendered into practical prescriptions.

    And I have also expounded, in slogan form, a basic theory of practice: ‘to teach is to model and demonstrate, to learn is to practice and reflect.’

    No short-cuts, no secret formulas, so simple it could hardly be called a theory. Not very original either. That, too, is not my fault. That’s how people teach and learn, in my view.

    Which means that a lot of the rest of it (yes, including ‘making meaning’) is either (a) flim-flammery, or (more commonly) (b) directed toward something other than teaching and learning. Like, say, power and control.

    Bill continues, “Stephen, your position on intentional stance sounds similar to Churchland’s position on eliminative materialism.”105

    Quite right, and I have referred to him (Churchland) in some of my other work.

    “Other materialist philosophers, such as Dennett, argue that we can discuss in terms of intentional stance106 provided it doesn’t lead to question begging interpretations.”

    Well, yes, but this is tricky.

    It’s kind of like saying, “Well, for the sake of convenience, we can talk about fairies and pixie dust as though they are the cause of the magical events in our lives.” Call it “the magical stance”.

    But now, when I am given a requirement to account for the causal powers of fairies, or when I need to show what pixie dust is made of (at the cost of my theory being incoherent) I am in a bit of a pickle (not a real pickle, of course).

    The same thing for “folk psychology” – the everyday language of knowledge and beliefs Dennett alludes to. What happens when these concepts, as they are commonly understood, form the foundations of my theory?

    “Knowledge is justified true belief,” says the web page.107 Except, it isn’t. The Gettier problems108 make that pretty clear. So when pressed to answer a question like, ‘what is knowledge’ (as though it could be a thing) my response is something like “it’s a belief we can’t not have.” Like ‘knowing’ where Waldo is in the picture after we’ve found him. It’s like recognition. And what is ‘a belief’? A certain set of connections in the brain. Except that now these statements entail that there is no particular thing that is ‘a bit of knowledge’ or ‘a belief’.

    Yeah, you can talk in terms of knowledge and beliefs. But it requires a lot of groundwork before it becomes coherent.

    Bill continues, “Even though we don’t understand ‘constructing meaning’ clearly we can still advise students in certain ways that will help them develop something that they didn’t have before.”

    What, like muscles?

    Except, they always had muscles.

    Better muscles? Well, ok. But then what do I say? “Practice.”

    “I think it’s more useful and practical to operate on that basis, for example, Papert’s advice on ‘learning to learn’ which he called mathetics still stands up well.”

    But what if they’re wrong? What if they are exactly the wrong advice? Or moreover, what if they have to do with the structures of power and control that have developed in our learning environments, rather than having anything to do with learning at all?

    “Play is OK” has to do with power and control, for example. “Play fosters learning” is a different statement, much more controversial, and yet more descriptive, because play is (after all) practice.

    “The emotional precedes the cognitive.” Except that I am told by psychologists that “the fundamental principle underlying all of psychology is that the idea – the thought – precedes the emotion.”

    And so on. Each of these aphorisms sounds credible, but when held up to the light, they are not well-grounded. And hence, not practical.

    Footnotes

    99 George Siemens. Forum. Online Connectivism Conference. February 1, 2007. http://ltc.umanitoba.ca/moodle/mod/forum/discuss.php?d=12#385

    100 Stephen Downes. An Introduction to Connective Knowledge. Stephen’s Web (weblog). December 22, 2005. http://www.downes.ca/post/33034

    101 Unlinked quotes in this article are all from the Connectivism Conference forum, http://ltc.umanitoba.ca/moodle/mod/forum/discuss.php?d=12#385

    102 Stephen Downes. Stephen’s Web (website). http://www.downes.ca

    103 Stephen Downes. Learning Networks and Connective Knowledge. IT Forum (website). October 16, 2006. http://it.coe.uga.edu/itforum/paper92/paper92.html

    104 Robin Good. Learning Networks + Knowledge Exchange = Learning 2.0. Kolabora (website). October 20, 2006. http://www.kolabora.com/news/2006/10/20/learning_networks_knowledge_exchange.htm

    105 Wikipedia. Eliminative Materialism. Accessed February 3, 2007. http://en.wikipedia.org/wiki/Eliminative_materialism

    106 Wikipedia. Intentional Stance. Accessed February 3, 2007. http://en.wikipedia.org/wiki/Intentional_stance

    107 Tony Forster. OLCC2007 – Knowledge and Learning. Weblog. February 3, 2007. http://tonyforster.blogspot.com/2007/02/olcc2007-knowledge-and-learning.html

    108 Wikipedia. Gettier Problem. Accessed February 3, 2007. http://en.wikipedia.org/wiki/Gettier_problem

    Moncton, February 03, 2007

     

    Connectivism and its Critics: What Connectivism Is Not

     

    Posted to the CCK08 Blog, September 10, 2008.

    There are some arguments which hold, essentially, that the model we are demonstrating here would not work in a traditional academic environment:

    • Lemire109
    • Fitzpatrick110
    • Kashdan111

    These arguments, it seems to me, are circular. They defend the current practice by the current practice.

    Yes, we know that in schools and universities students are led through a formalized and designed instructional process. We understand that some students prefer it that way, that some academics are more comfortable with the format, that most institutions require the practice.

    But none of this proves that the current practice is *better* than what is being described and demonstrated here. Our argument, which will be unfolded through the twelve weeks of this course, is that connectivism is at least as well justified and well reasoned as current practice. And the practice, demonstrated through this course, shows that it works.

    Right now we are engaged in the process of defining what connectivism is. Perhaps it may be relevant for a moment to say what it is not.

    George Siemens offers a useful chart112 comparing Connectivism with some other theories. From this, we can see that, according to connectivism:

    • learning occurs as a distributed process in a network, based on recognizing and interpreting patterns
    • the learning process is influenced by the diversity of the network, strength of the ties
    • memory consists of adaptive patterns of connectivity representative of current state
    • transfer occurs through a process of connecting
    • best for complex learning, learning in rapidly changing domains

    Now I would add to or clarify each of these points (that would be another paper). For example, I would say that the learning process is influenced by the four elements of the semantic condition (diversity, autonomy, openness, connectedness), that while memory is adaptive, it is not (necessarily) representative, and that learning, on this theory, isn’t ‘transferred’, but grown anew by each learner.

    But despite these clarifications, we can see pretty easily from this description what connectivism is not (and, more importantly, what it is not intended to be):

    • learning is not structured, controlled or processed. Learning is not produced (solely or reliably) through some set of pedagogical, behavioral, or cognitive processes.
    • learners are not managed through some sort of motivating process, and the amount of learning is not (solely or reliably) influenced by motivating behaviours (such as reward and punishment, say, or social engagement)
    • learners do not form memories through the storage of ‘facts’ or other propositional entities, and learning is not (solely or reliably) composed of mechanisms of ‘remembering’ or storing such facts
    • learners do not ‘acquire’ or ‘receive’ knowledge; learning is not a process of ‘transfer’ at all, much less a transfer that can be caused or created by a single identifiable donor
    • learning is not the acquisition of simple and durable ‘truths’; learners are expected to be able to manage complex and rapidly changing environments

    The reason I take some pains here to describe what connectivism is not is that it should now be clear that none of these constitutes an argument against connectivism.

    In one critique, for example, we read “I think this open ended process can lead to some educational chaos and we need to be careful of that.” (Kashdan)

    As we have seen in this course, the connectivist approach can pretty reliably lead to chaos. But this is because we believe that learning is not structured, controlled or processed. And we expect students to be able to manage complex and rapidly changing environments – in other words, to be able to manage through just the sort of chaos we are creating.

    Saying that it “can lead to some educational chaos” is therefore not a criticism of connectivism.

    To be sure, educational chaos does not work well in traditional learning and existing academic institutions. So much the worse (we say) for traditional learning and existing academic institutions.

    One might ask, then, what we expect traditional learning and existing academic institutions to look like in a connectivist world. Well, some of that was touched on in my presentation to eFest113 (to be posted later) this week.

    The model of learning we have offered through this course intersects with the traditional model at least through the definition and provision of assignments for evaluation. These, which are openly defined (everybody can see them), are applied to students who have registered for the course for grading and credit.

    We have already spoken with some students about applying the learning done in this course for credit elsewhere. If, say, a person in another country completes our assignments, and they are graded by a professor in some other institution, then that is just fine with us, and has served our interest of providing more open access to education.

    There is no reason for the delivery of instruction (whatever form it may take) to be conjoined with the more formal and institutionally-based assessment of instruction. Which means that we can offer an open, potentially chaotic, potentially diverse, approach to learning, and at the same time employ such a process to support learning in traditional institutions.

    As George has said, we are doing for the delivery of instruction what MIT OpenCourseWare has done for content. We have opened it up, and made it something that is not only not institutionally bound, but something that is, to a large degree, created and owned by the learners engaged in this instructional process.

    There is nothing in traditional institutions – except, perhaps, policy – that prevents this model from working. The criticisms of this model that are based on pragmatics and practicality are not sound. They achieve their effectiveness only by assuming what they seek to prove.

    Engagement with, and opposition to, the process described by connectivism will have to take place at a deeper level. Critics will need to show why a linear, orderly process is the only way to learn, to show why learners should be compelled, and then motivated, to follow a particular program of studies.

    We are prepared to engage in such discussions.

    But a discussion rooted in the traditional institution must allow and acknowledge that connectivism, if adopted, would change existing institutions, and must base its reasoning in the desirability or the effectiveness of such changes, and not merely in the fact that they haven’t happened yet.

    Footnotes

    109 Daniel Lemire. Comment to ‘CCK08 First Impressions’. Stephen’s Web, September 8, 2008. http://www.downes.ca/cgi-bin/page.cgi?post=46013

    110 Catherine Fitzpatrick. Comment to ‘What Connectivism Is’. September 9, 2008. http://halfanhour.blogspot.com/2007/02/what-connectivism-is.html

    111 Kashdan. Comment to CCK08 Moodle Forum. No longer accessible. Original URL: http://ltc.umanitoba.ca:83/moodle/mod/forum/discuss.php?d=641

    112 George Siemens. What Is Connectivism? Google Docs. September 12, 2009.

    113 Stephen Downes. MOOC and Mookies: The Connectivism & Connective Knowledge Online Course. Seminar presentation delivered to eFest, Auckland, New Zealand by Elluminate. Stephen’s Web Presentations. September 10, 2008. http://www.downes.ca/presentation/197

    Moncton, September 10, 2008

     

    ‌Connectivism and Transculturality

     

    Transcript of my talk114 delivered to Telefónica Foundation, Buenos Aires, Argentina.

     

    One. Communities as Networks

    As you can see we are broadcasting live to a worldwide audience of 27 people. We’re just sort of messing around here. This is totally impromptu, I didn’t advertise this or anything. This is the sort of concept I want to talk about today, doing this kind of thing.

    Alejandro mentioned edupunk in his introduction. You know, it’s this whole idea of “doing it yourself” and making it happen for yourself rather than depending on organizers or others to do it for you. And we sort of asked, a little bit, whether we were allowed to do this sort of thing, but we just found an ambient wireless connection in the room, I determined very quickly that it was a very good connection, I was very happy about that, and so consequently I figured, “Great! We’ll live stream it. Why not?” And we’ll record the video, and I’m also recording the audio, and we’re recording the Spanish translation in the back of the room there, and so, we’re creating learning objects on the fly.

    And this is an important fundamental lesson, because, you know, the topic of the talk is culturality, and connectivism. I don’t talk about culturality a lot, and I think maybe I should talk about it a bit more. But I think the first and most important lesson of culture is that it belongs to the people, it belongs to us, it is what we make it, and we have more tools now than we ever did in the past to make culture.

    When we think about culture and when we think about things like community or nationality or even language and linguistic groups like we do a lot in my own country we tend to think of these things as undivided wholes, of instances of commonality, where everybody is the same in some sort of central essential way. Think about the nationalism of being English or French or whatever, the idea is that all the English people have to share the same sort of English cultural values, all the French people share slightly different cultural values.

    But you know, when you look at these elements of culture, and you study these elements of culture, you find that culture does not break down into this nice, neat set of groups and categories. Look at this network up here on the screen. This is a network of western European languages. And the main thing you should see, and this is something you know already, is that all these languages are related to each other and they all derive from versions of each other.

    This is only a very partial chart of these languages, and really, if you were going to draw a full chart of these languages, this is just the derivation of different languages from their sources, but if you were to map the influence from one language to another language to another language you would get a very complex diagram.

    And in fact, the culture of any individual person is composed not of metallic-like elemental forms of being indivisible and separate and pure. The culture of any individual, any person you care to name, all of you, myself, Alejandro, the people at the back of the room, the guy recording the video, this culture is a mixture of all cultures, each to a different degree, and all these cultures are related to each other the way the words in a language are related to each other. And this is true not just of language but of culture generally.

    I love this diagram. This is one of my favourite diagrams in the entire world. It’s almost impossible to read. On the slide it comes out a bit better. All of those dots, each dot represents a scholarly domain. Philosophy is there, we’ve got child psychology over here, we’ve got anthropology, archaeology, up there we’ve got electro-chemistry, polymers, organic chemistry, down here in the green we’ve got psychology – this is a map, on the screen (if you look at the slides later you’ll see there are little white lines connecting all these dots) is a huge network, and this is an actual representation of papers in one discipline that cite or refer to papers in another discipline.

    When we think of a domain or discipline like psychology, say, or geography, our first instinct is to think of it like a culture or a nationality, you know, like you or me, we’re inclined to say, “Those geographers, they’re a strange bunch.” Or “Engineers can’t be trusted.” Right? But really, when you look at the composition of these disciplines, they are composed of links to the other.

    So culture and discipline and a wide variety of social and community phenomena like that are based not in some sort of essential nature of being, but rather based in connections with other entities of a similar type. They’re clusters located within a network rather than stand-alone “we are all united” kind of groups. And it’s an interesting picture, and I think it’s an important picture, because this runs directly contrary to the sort of nationalism or regionalism or communitarianism that you might see depicted in your newspaper and used for not-so-savory political purposes.

    We are more alike than we are different from other cultures. And the things that divide us really can be defined through these links. If we understand that our culture, that our nationality, our language, that our domains or disciplines, are clusters in a network then we think of these things differently.

    I have a diagram in Spanish, how about that? I’m not going to spend a whole lot of time on this, you can come back to the slides and find this, but this is a diagram I created in New Zealand and I originally created it in English and then it was kindly translated for me into Spanish (and also he made it look a lot nicer, the writing is a lot nicer). And what I want to do with this diagram is to draw what I think is a fundamental distinction. Now this distinction is not unique to me, I did not invent this distinction, I’ve just drawn it out, and tried to clarify it, for my own thinking. And it’s not necessarily The Way of the World, it’s just a way of looking at these sorts of issues.

    On the left we have ‘groups’, and this is your traditional nationalistic “we are all the same” kind of definition of community, and on the right we have ‘networks’, and this is what I believe is a more appropriate, more reflective, more realistic description of culture and community. There are different dimensions across these, and I identify four major ones, which I’ll talk about briefly in the next few slides.

    Two. Four Dimensions of Networks

    The first major dimension is diversity. And, one of the major things that I’ve inherited from my culture in Canada is a desire for diversity, and this is a value that has been deliberately shared and talked about and created and generated across Canada, and I’ve always felt and always believed that you need a mixture of materials, you need a collection of different perspectives, different points of view, in order to come to any new understanding.

    Think of a conversation. Think of a discussion you have back and forth with your friends. And probably in that discussion you will create new knowledge, right? And that’s the whole idea of having such a discussion. But suppose you knew exactly the same things as your friend. What would you and your friend talk about? You know all the same things. You think all the same things. You know, one person would say something, blah blah blah, “Yeah, I know that,” blah blah blah, “Yeah, I knew that.”

    You need to have different perspectives and points of view even to have a conversation, much less to create a community or to create new knowledge. [Here I’m thinking of Dretske] Diversity is essential to community. So what’s important about a community is not the way everybody is the same. What’s important about a community is the way everybody is different, and able to connect to each other.

    Good communities are open. Closed systems, closed communities become stagnant. [Here I’m thinking of Marquez] Imagine if nobody was allowed to come in or out of the city, even a big city like Buenos Aires. Eventually you would lock out or freeze up the source of new ideas. It would become like an echo chamber. The system becomes clogged with the “creative product” of its members.

    Communities have to be open, they have to have some source of new material coming in, whether it’s raw material, resources, ideas, etc., and then they have to have some place where they can send their creative product, the things that they make, the ideas that they have.

    Openness is essential to community. But again, with this traditional kind of ‘group identity’ kind of description of culture, there’s this inclination or desire to close off a culture from the rest of the world. When you do that, you become like North Korea. You become isolated, you become unable to cope with even your own internal society.

    A third criterion that distinguishes a community defines as a network from a community defined as a group is autonomy. And what that means is that each of the members of that community are working toward their own sense of values, their own sense of purpose, their own goals or endeavours. Now that does not mean there cannot be a common purpose. People can choose to work together. What it does mean is that a common purpose or a common goal does not define the community.

    And, you know, we have various people from corporations here, one of the first things corporations like to do is create a “vision statement” and then a strategic plan, etc., that everybody in the corporation will line up behind. But this is actually kind of an artificial reality, a fantasyland, because if we think about any corporation that we know of, all of the people at the corporation are working at the corporation for different reasons. They’re not all lined up behind the vision statement and strategic plan. I don’t wake up in the morning thinking “I want to increase the effectiveness and foster productivity of the customer base.” Nobody does that. It’s like, “I need to get some milk.” And that’s why I got up today. And to get some milk I have to go to work; I won’t get milk, I won’t get paid, otherwise. So, it’s better to recognize that, better to understand that even a cohesive organization like a corporation is not united behind a single goal, but rather, is a collection of autonomous but cooperating individuals.

    And that’s the key. We talk, we hear a lot about collaboration. But I like to distinguish collaboration from cooperation. Cooperation is an exchange of mutual value between autonomous individuals, rather than collaboration, where the two individuals subsume themselves under a single common goal. Now, again, there is room for voluntary collaboration. I’m not eliminating it from the scope of the world. But collaboration, working together, does not define community. Community is based on each of us following our own way. As Star Trek would say, “Infinite diversity in infinite combinations.” Or as John Stuart Mill would say, “Each person pursuing his or her own good in his or her own way.”

    Finally, the fourth major dimension distinguishing a network-based community from a group-based community is interactivity. And this is a bit tricky. But this is a concept that’s going to underlie a lot of the rest of this talk. So I want to draw it out a little bit.

    Now, there are different ways we can think of the way ideas are created and spread through a community. One way is what we might call the ‘broadcast method’, and here’s how it works: I have an idea, say, “the Earth is square” (it doesn’t have to be a true idea, it can be any idea) and what I say is “the Earth is square” and then I say it to you, and so you in the front row, you all say it, and you agree, “the Earth is square”, and you say it, and you pass it on, and soon, everybody in the room is saying “the Earth is square.” So what is happening here is that the idea of one individual has been propagated through the group and has become part of the group’s knowledge.

    Now, you can see the limitation of that. It means that we, as a group, cannot know more than any individual can know. Right? Because the idea has to be formed in my head, in my mind first, or maybe in one of yours (it depends on how democratic we are), but still, it’s the same kind of thing, it’s formed in my head and then spread to each of you. In philosophy we would call this as well a ‘fallacy of composition’, making the properties of the group the same as the properties of the individual. The idea (had by) the group would be the idea (had by) one of the individuals.

    This is how groups operate. This is how systems based on identity and conformity operate. But they have this upper structural limitation.

    Now, imagine we wanted to have a really complex idea. A really complex kind of knowledge. Well, we can’t do it the way I’ve just described because it’s the sort of knowledge that’s going to go beyond what I could have from my one perspective or my one point of view. It goes beyond what can be known by one person. I’ll give an example: flying an airplane from Toronto to Buenos Aires. You might think, “Oh yeah sure, that’s no problem, anybody could do that, right?” Well, let’s make it a big airplane. Let’s make it a 747. That’s a big airplane.

    Think of that concept. Think of the knowledge it takes to do that. Could that knowledge be contained by any one individual? Clearly not. What we need to do is to create a mechanism where the knowledge is not contained by any one person but rather as they say ’emerges’ from the interactions among the people. So we’ll all have our own bit of knowledge, or own perspective, on how to fly this airplane. Somebody is the pilot, somebody is the co-pilot, somebody is the navigator, somebody is the flight attendant, somebody makes the tires the plane lands on, somebody builds the windscreen, somebody washes the windscreen, somebody serves vodka to the first class passengers, water to the rest, and somebody sits in the control tower and manages the plane traffic, and there’s the guy who, when the plane’s coming in, waves his hands and says “park here!” It takes all of these different perspectives, different bits of knowledge, to create the social knowledge of how to fly this airplane.

    There’s a wide variety of knowledge like that. And it’s important to understand that it’s not simply a joining of individual knowledge, it’s rather new knowledge that did not exist and could not exist from any individual perspective. It’s, as I said, emergent knowledge. I’ll talk a little bit about that as the talk progresses. But essentially the idea of emergent knowledge is that it is the pattern that is created by a set of interconnected entities. So, this knowledge, because it is a pattern, does not exist ‘in itself’, it has to be recognized by a viewer.

    Yesterday I was talking about how ‘knowing’ is ‘recognizing’. This is what this is. Look at, for example, your television. Think about watching a program on your television. And you see a picture of Richard Nixon on your television. You all know who Richard Nixon is, right? Oh, it doesn’t matter. Well – it does matter. Really, what you’re looking at is a whole bunch of pixels. Little dots on the screen. The reason why you recognize a picture of Richard Nixon is because they’re organized in a certain way. No individual pixel has a picture of Richard Nixon, in fact, it couldn’t. Couldn’t possibly. Because a pixel can only have three or four values of colour. And what is important is not only the organization, but you need to recognize that, yes, this is in fact Richard Nixon. If you did not know who Richard Nixon was then when you look at the pixels it’s just some guy who looks a little nervous. And if you were an alien from outer space you might not even recognize that this is a human. So, this is the sort of thing. The image of Richard Nixon ’emerges’ from the pixels. Our knowledge of how to fly a 747 ’emerges’ from individual knowledge. But this knowledge does not exist in itself; it has to be recognized as existing.

    So that’s what I mean by interactivity. When I say ‘interactivity’ I say the knowledge in the community is created by the interaction of the members of the community rather than created in one person and then spread through the community.

     

    Three. Two Kinds of Knowledge

    Now, the theory of connectivist teaching and learning is based on two ideas. First of all, the idea that the human brain is a network, just like the networks I’ve described. It’s a whole bunch of individual entities, neurons, connected with each other. And knowledge in the human brain emerges from these connections. And then the second idea is that our communities, our sociality, our culturality, again is created through these connections. So the way we have cultural identity, the way we have language and science and research and reasoning in our society generally is the same mechanism as the way we have knowledge individually or personally.

    It’s the same system, we’re using the same system, the same logic, the same sort of idea. And when you think about it, that has to be the case. Think about a neuron, just for a second.

    Neurons are really stupid. All a neuron can do is receive an electrochemical potential and then decide whether or not it’s going to fire, and, well, that’s the life of a neuron. It doesn’t know why it’s firing, it has no idea about the world, or even the neurons next to it, all it knows is that if it gets enough signals coming in, it’s going to fire. And, over time, depending on the signals that come in, it might form connections with other neurons. But it just does that because of physics and chemistry, it doesn’t ‘want’ to form connections, it just does form connections, it’s just a matter of biology, that’s it.

    So, neurons are really stupid. If all you could think of was what one of your neurons could contain, you could not even get out of bed in the morning. You’d be lying there in bed, “dut-dut-dut-dut… dut-dut-dut-dut.” Because that’s all a neuron can do. So our knowledge, our intelligence, must be based on something emergent from the connective activity of many individual neurons, can’t be based on the content of a neuron, has to be based on the pattern of connectivity of these neurons.
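    [A minimal sketch of the kind of unit being described here; the threshold and the incoming signal strengths are chosen arbitrarily, purely for illustration.]

```python
# A threshold unit: it fires only if enough signal comes in.
def neuron(incoming_signals, threshold=1.0):
    return 1 if sum(incoming_signals) >= threshold else 0

print(neuron([0.2, 0.3]))        # 0 -- not enough signal, stays quiet
print(neuron([0.4, 0.4, 0.5]))   # 1 -- enough signals coming in, so it fires
```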

    We replicate that in connectivist teaching. We form a network in which individuals act as though they were neurons. And what they are trying to do in this network is to receive signals, process signals, send signals, and connect with other people. Very simple activities. That’s the key.

    Because… well, anyhow… (it’s really easy to get off track in one of these talks because there’s always a reason, and I can follow the train of reasoning, and we go back to, like, you know, anyhow, “first there’s a hydrogen atom”, and you know, anyhow.) [here I’m thinking of Minsky]

    Here’s what a connectivist course looks like. It’s also what a personal learning environment looks like. A personal learning environment is the environment you would use to learn in a connectivist manner. Here’s you, at the centre, it’s represented in this case by Elgg because the people from Elgg created this diagram, it’s a version of the diagram created by Scott Wilson, and then you at the centre are connected to all kinds of different things, different people, different applications online. You’re connected to social networking, you’re connected to communities, you’re connected to your own files, to search, to weblogging, and so on. All kinds of different services.

    You should think of it as you being connected to other people. And everything else is just physics and biology, mechanisms for you to send messages to other people and other people to send messages to you. It’s a big communications network. That’s what a connectivist course is.

    Now, remember how I distinguished between groups and networks. There is a corresponding difference between two types of knowledge. There is the traditional type of knowledge that people have always tried to teach you in a classroom, and then there is this new, network type of knowledge that I have been talking about.

    Think about the kind of knowledge that even today people think you should be learning in schools. It’s static. They want to teach people basic first principles, foundational knowledge, stuff that doesn’t change. “Two plus two equals four.” Make that the knowledge that you learn. It’s declarative. It’s kind of a hard concept, but what I mean is, it is a set of propositions. Or, a set of statements of fact. So, the idea here of this kind of knowledge is that, when you learn, learning is the accumulation or collecting of a set of facts. As Wittgenstein, in his earlier, incorrect days, said, “The world is a totality of facts.” And the educational version would be, “an education is a totality of facts.” That’s the old way of looking at education.

    And then it’s authority based. The idea is, the contents of one person’s mind will be sent to other people. “I know that the world is a square, and I will teach you and you and you that ‘the world is a square’.” That’s the old way of looking at knowledge. That’s the way of looking at knowledge where the idea of teaching is to make everybody the same. But that’s not productive of community. In fact, it’s destructive of community. It makes communities stagnate and die rather than grow and prosper.

    So, I’m more interested in another kind of knowledge, which is ‘knowledge in the network’. Knowledge in the network has, well, many properties, but these three properties are pretty key.

    First of all, it’s dynamic. What we mean by that is, it’s always changing. What was true yesterday may not be true tomorrow, what was true tomorrow may not be true the next day. Even if there are underlying facts of the matter, even if two plus two is always equal to four, it’s not always relevant. It might not always matter. There are always facts out there in the world, the world may not be a totality of facts, but it has a lot of them, but these facts come in and out of relevance, come in and out of prominence. So even if there are unchanging parts of the world, our relationship to these things changes.

    Knowledge is also what we call tacit. Or non-declarative. The idea of tacit knowledge is an idea from Michael Polanyi, and a way of describing it is, ‘ineffable’. It’s knowledge that quite literally cannot be represented in words. Because we do not have sentences in the brain. We may think in sentences, or we may think in the experience of reading or hearing sentences, but remember, we just agreed earlier, we have neurons in the brain, and that our knowledge is composed of connections between these neurons. These connections are not sentences. These connections are distributed across tens of thousands, millions, of neurons. And the kind of knowledge that we can have is much more complex, much more multi-faceted, much more – ah, I’m searching for a word – much more whatever than we can express in a sentence. (Voice: “Kaleidoscopic.”) Kaleidoscopic. Yeah. That’s a good word.

    Think of – what was the example I had yesterday? – you can sit there all day and talk about how to do something and … well, there’s all kinds of examples. ‘Riding a bicycle’ is an example Polanyi used. I can sit here all day and describe to you how to ride a bicycle, but really, riding a bicycle is something you have to learn by getting on the bicycle and riding it. Because the language that I use to describe how to ride a bicycle does not completely describe how to ride a bicycle.

    And then, finally, knowledge in the network is constructed. I should probably not say ‘constructed’ so much as ‘grown’. Knowledge in the network results from the entities in the network interacting with each other and new connections being formed. In the human mind, knowledge is created when we create connections between neurons, one neuron to another. Socially, social knowledge, that is created when we connect individuals, one individual to another individual.

     

    Four. Learning is not Social

    So here we are, here’s our personal learning environment, each one of these is one of these. So this – these things – are connected to one another in this network. We can also think of each one of these as a different person or we can also think of each one of these as ways that you yourself are connecting to the network. It’s complex, right? It’s not simple.

    But the main thing we have here is we have individuals that are connected to each other through a variety of technologies. That’s the main thing to think of here. So, what happens when we think about teaching and learning in this sort of network is that the role of teachers and learners is to participate in the workings of this network. That is the activity that we are undertaking. And one of the things you should notice, almost in the way I’m describing this, is that the activity of teachers becomes the same as the activity of students. They’re not two separate activities. They merge, they become one and the same activity.

    OK, now, so I’ve mapped out this discussion. You’ve probably seen elements of this discussion in a lot of the other stuff that you’ve seen or read. The next step that most people take, including I might add George Siemens and others, is to say, “OK, we’ve described networks, we’ve described social networks, all learning is social, therefore we’re going to talk about social learning. And so our theory of learning will be a description of learning in social networks.”

    Now this is a turn that I do not take. This is a way that my approach to learning in networks is different or distinct from other people’s. Where other people go, at this point, is one of a number of different directions, and I’ve listed a few here:

    • There’s the old behaviourist or instructivist method of transmitting knowledge, Skinner and all of those guys, Gilbert Ryle ‘The Concept of Mind’, the whole host of them where learning is behaving. There was a comment in my blog just the other day, it begins, “Well we can agree that we know someone has learned when they are behaving a certain way,” and I thought, “Oh yeah right, here we go again.”
    • Other people have become more advanced, and we have Moore, for example, M.G. Moore bases his theory of learning very much on information theory. It’s interesting, my background is in philosophy and science, not educational theory, so I approach Moore late in life. I read Moore, and my reaction was, “this is a checksum network.” This is sending a signal, sending back a confirmation, and so on. Moore’s theory is actually based on a protocol for information and communication. It’s pretty useful as far as that goes, but it’s an externalist-based social learning kind of theory.
    • Then you get more complex, more interesting forms of social learning, social constructivism from Vygotsky and others, problem-based learning [here I am thinking of David Jonassen], and I could go on. These are all the fairly standard theories of learning that everybody learns when they study education or learn about pedagogy.

    I do not go in this direction.

    Let’s think about this ‘social learning’. Think about where the models of knowledge and learning are coming from. Well, we have externally based definitions or community-based definitions.

    Learning objectives will be defined by the community. What counts as a body of knowledge will be defined by the community. The processes are externally-based. The processes of learning are going to be defined by the community. They’re all ‘activities’, ‘conversations’, ‘interactions’, ‘communications’. Everything’s happening external to the person. We have external systems. We define learning in terms of classes. Even in terms of networks, groups, collaborations. All kinds of things that are happening outside the individual person. And, of course, evaluation is by somebody external to you, an examiner or something like that.

    It’s as though the entire process of learning happens in the society, in the community, and nothing happens in your head. And that just seems wrong to me. Because throughout this talk I’ve been careful to distinguish between the knowledge in our head, which is formed by connections of neurons, and the knowledge in society, which is formed by connections of people. And the learning that happens in our head does not consist of connections between people. The connections between people are the learning that a society or a community does as a whole.

    The learning that we do as individuals is different from that. It is the growing of connections in our own mind. Very different. Social knowledge – the knowledge of a society, the knowledge of a culture, the knowledge of a community – is not the same as personal knowledge. Social knowledge emerged from the connections created by many individual persons. Two distinct things. They’re related. They’re connected. But they’re not the same.

    So we have different kinds of learning, different kinds of knowledge management. ‘Personal knowledge management’ is ‘learning’, ‘social knowledge management’ might be ‘research’ or ‘social learning’ or something like that.

    So the key thing that I want to underline here, that makes my approach to education a bit distinct from other people’s is that the product of the educational system is not a social outcome. And it’s interesting because when you think about how people define what the objectives of an educational system ought to be, they are so often social and cultural objectives. “We want everyone to know our underlying social values, we want everyone to know mathematics, we want everyone to be able to take part in the creation of a jumbo jet that flies from New York to Buenos Aires.” They’re all socially defined.

    But learning is in fact a personal outcome, not a social outcome. It (defining learning as a social outcome) is like having the picture of Richard Nixon tell the pixels what they ought to be.

    Five. Learning a Discipline

    So how does this work? Here’s the picture of connectivism my way (I think George (Siemens) is still working on this – I don’t think he disagrees but I don’t know if he agrees).

    So, what have we got? We have a social network. We have a personal learning environment. We’re all connected. We’re having communications back and forth. We’re receiving content, signals, input, we’re manipulating it, we’re creating it, we’re sending messages to all the other people who are our friends, we’re working in the social network.

    But we’re using the social network to create in ourselves a neural network. It’s just like exercising with a bar-bell. [Picking up an object] Pretend this is a bar-bell, so I’m exercising – I don’t want to become one of these (bar-bells), that would be ridiculous. But I use this in order to develop the muscle.

    So I use something external to myself in order to develop an internal capacity. There needs to be some kind of consistency – a barbell has to be something I can lift and hold. The external properties – the gravity, the weight of the barbell, are important to the formation of the muscle. But they are not the same (as the muscle).

    Personal knowledge consists of neural connections, not social connections. Very important. The reason why this is important is because when we understand personal knowledge as neural connections, then personal knowledge does not consist of the artifacts that we use to describe social knowledge. The artifact that we use to describe social knowledge might be ‘a sentence’, “Paris is the capital of France.” But personally, in our own mind, in our own neural network, it might look like that (see diagram). And this is not simply the representation “Paris is the capital of France,” it’s also the representation “Cows are brown.” And it’s also the representation, “water is wet.”

    What’s happening here, there are two things happening here. First, this is non-propositional, non-sentential, non-explicit. It’s tacit. It’s ineffable. There are no words here. Second, it is distributed. There’s no particular place that corresponds to the knowledge that ‘Paris is the capital of France.’ And this knowledge is embedded with other knowledge that semantically is not even related to it.

    It results in funny things. It means that, if I tell you, “My dog is white,” your understanding of Paris changes a little bit. You wonder, “how does that make any sense,” well, your understanding of ‘dog’ is in here, and your understanding of ‘Paris’ is in here, and if I change your understanding of dog, I change your understanding of Paris a bit.

    That’s what we mean by ‘complex’. You can’t do just the simple cause-effect kind of thing. When you have complex knowledge you have situations like this where you can’t just get at one element of knowledge in isolation from all the rest (This makes a total mash – as an aside – a total mash of traditional education research).

    It’s the difference between ‘knowing that’ “Paris is the capital of France” or even ‘knowing how’ to do something, and what it feels like to know the capital of France. When you think about that – you all know the capital of France, right? (Well you must, I’ve told it to you three or four times already; if you’ve forgotten it, now you’re in deep trouble.) But there’s a difference between ‘knowing’ this, as a fact, and knowing what it feels like to get the answer right when somebody asks you, “what is the capital of France?” And it’s this feeling, this overall, this full-mind kind of sensation, that is the actual knowledge. The sentence “Paris is the capital of France” is just the social artifact that we produce. It’s public knowledge, it’s social knowledge, but it’s not personal knowledge.

    Learning a discipline, like geography, or psychology, or any of these things, is a total state. It’s a transformation of the self from somebody who was not a geographer to somebody who is a geographer. It’s not a collection of individual bits of knowledge, it’s a process of becoming something. We grow our internal state in such a way that at the end of that growth we’re able to say, “I’m a philosopher.” Or a geographer. Or whatever.

    The question is, how do we know? Typically, we give people tests. We ask them for social artifacts. We ask them for very simple propositional social artifacts. Write a sentence. Answer a question. Create an essay. (Solve a problem.) Real simple. And there’s all kinds of ways for a person to correctly produce the social artifact without actually having become the thing that we’re trying to become. People who are good on tests (can be) bad at a profession; I’m sure you’ve seen that.

    Learning to become a geographer (or a philosopher, or whatever) occurs not by being presented with a set of facts but by immersing yourself in the discipline. By joining the community of geographers or the community of philosophers and taking part in the wide range of interactions typical of that community. It’s like learning a language. I can sit there and describe all the elements of learning Spanish, but really, to learn the language you need to get into the language community and actually speak the language. And that’s how you learn the nuances of the language, the subtleties of pronunciation, the appropriateness of certain words at certain times, and so on.

    It’s expressed functionally, rather than cognitively. “Can you act as a geographer in a network of geographers?” Or you can start to ask the question, “If you were to stand there in a group of physicists and talk about physics to them, would you stand out as someone who didn’t know what they were talking about? Or would they all accept you as a physicist?” It’s that kind of thing. It’s not, “Can you state a whole bunch of facts related to geography?” Or it’s like teaching. “If you stood at the front of a room and started teaching, and other people who were teachers were watching you, would they accept that you knew what you were doing, or would they say, ‘this person is not a teacher, they have no idea what they’re doing?'”

    You see how the community recognizes whether or not a person is a teacher, whether or not a person is a geographer. It’s like seeing that pattern, that complex pattern of associated behaviours, actions, reactions, inclinations and all of the rest. They are recognized as being such-and-such a kind of person.

    That’s why, when it really becomes important that somebody know what they do, we don’t just give them a test. Airline pilots, right? We don’t give them a true-false quiz, and then let them fly an airplane. We don’t let them write an essay answer, and then fly an airplane. We put them in a simulator. We immerse them, and we actually put them in a real airplane, without passengers, and see if they can fly the airplane. Similarly with doctors. We don’t just take a doctor straight from a test to the operating room. They have to go through a long internship where other doctors look at them, watch all the different things that they do, and recognize, “Yes, this person is a doctor.”

    In other words, we evaluate whether a person has developed the appropriate neural network, the appropriate personal knowledge, by their performance overall in a community, in a network. (Only a network can evaluate a network!) It’s not the specific bits of performance – and this is one of the reasons why I’m worried about competencies, I’m worried about breaking things down into more and more precisely defined disciplines – it’s not the little bits of knowledge that we might have, it’s how they function when immersed into the environment.

    How do you know that a person can swim? They can tell you everything they know about swimming, but you don’t know that they can swim until you put them in the water and see whether they sink or not. That’s the deciding factor. And when you put them in the water, anybody can tell, right? “Oh yeah, that person’s swimming.” Or, “oh yeah, that person’s drowning.” Doesn’t matter whether they passed the test.

    Personal knowledge is not social knowledge. It does not consist of social artifacts. It is not constructed the way we construct a sentence. It is not built the way we build a house. It is not organized the way we organize a society. It is grown the way we grow a muscle. And so the method, the activity of learning, is appropriate to that kind of knowledge.

    Standing there, even like I’m doing here, and spouting a bunch of facts at people doesn’t produce personal knowledge. It might produce little bits of social artifacts that they may or may not remember, but it’s not going to produce the knowledge.

    The very best you can do is to induce, or stimulate, some kind of thinking. I can’t take a sentence and put it in your head. Not possible. Even though I may look like I’m really trying to do that right now, and you notice how I’m even leaning forward, and trying to push it into your head, it’s not happening. You’re all reacting in your own individual and unique way – some of you are smiling, some of you are laughing, some of you are at the back are shaking your head, you know, that’s OK – and I can’t put my knowledge in you, all I can do is give you various stimulations that get you thinking.

    But it’s just a part of your overall social experience, and it’s your overall social experience that will produce the knowledge. What we’re doing here is a part of the practice of a discipline as a whole, and by participating in that discipline we’re becoming a little bit more able to work in that discipline.

    So, the way to think of a personal learning environment, in this context, is as an exercise machine, a way to immerse yourself in a community and work with the community, to get yourself into the community and practising with the community. If you wanted to become a geographer you would use your personal learning environment and connect to the community of geographers. You wouldn’t sign up for a geography class – well, you might, but that wouldn’t be your education – your education would be to start listening to and watching geographers, and then sort of tentatively at first, and then more and more, to practise doing the things that geographers do.

    You could learn philosophy in the same way, and in fact, this is what we do in philosophy, that’s why philosophy is so cool, in philosophy nobody spouts a bunch of facts at you and expects anyone to believe it. The whole act of philosophy is doubting what you’re told. But what happens is that we as philosophers immerse ourselves in this environment and put ourselves into proximity with other philosophers and argue back and forth like crazy, usually over copious quantities of beer (because that’s what philosophers do, which is why their lives are so short) and gradually, gradually, you become more and more capable of being a philosopher, more and more adept at the practices, the method of speaking, the language, the jargon, the world view, the way of thinking, all these things, of the community of philosophers.

    And not only that, you will actually become through that process an empiricist, or a rationalist, or a realist. You’ll actually become affiliated with a subdomain and become a part of a community, become a part of a culture or a society within philosophy by gradually developing a greater affinity for that group, rather than becoming more and more like members of that group. And it’s not like they told you “You will be a realist.” It’s you determining your own path, your own direction, first, to become a philosopher, and then, to become a realist (which I would never do).

     

    Six. The Connectivist Course

    Developing personal knowledge is like exercising. Much more like exercising than inputting, absorbing, remembering. Your personal growth, your exercise, develops as a consequence of interactions with the rest of the community. And so we have the connectivist course.

    Quote-unquote. It’s not really a course. In a connectivist course, like the course that George and I taught, Connectivism and Connective Knowledge, or like the course that I’m starting in June called Critical Literacies, we don’t ‘teach’ information. We don’t ‘have content’ that we want to pass along. Rather the ‘content’ of the course is created by the members of the course themselves.

    That includes the instructors – it’s not like constructivism where the instructor is the ‘guide by the side’. None of that. It doesn’t work unless the instructors take part too, because if you think about how community works, in a community, like a community of geographers, the alpha geographers are in there slugging it out, and arguing geography back and forth with other geographers, “I can draw better maps than you can, here look…” and then all the other geographers that are less alpha and are learning to be geographers, they’re in the same community.

    The instructors and the students are in the same place. They’re creating the same content, they’re working with the same content. In Critical Literacies, I’ll be in there creating content, and I’m sure people will read the content I create, but students will be creating content as well. And together, through our individual actions, we create this community, this society or this culture, around the idea of critical literacy.

    We don’t tell students to perform specific tasks. Rather, they are presented with the things that other people create and they do with them whatever they think is relevant.

    Now, what will they do? Now that depends on their experience in the course. It depends on who they’re watching, who they’re following, who they’re using as models or mentors, who they’re trying to imitate, or whether they’re trying to imitate at all, the idea is that they in their own unique way are working with the content and materials that constitute that discipline.

    What will happen a lot of the time, most of the time if you’re lucky, is, they will change the discipline. They will add something unique to the discipline that we’ve never seen before. And the discipline will grow and develop.

    You know, a lot of people in different disciplines have their greatest ideas, their unique contributions to a discipline, originally formed when they were students. All this network stuff that I do originally formed in my own mind when I was a student. I’ve pursued it ever since. It was as a student that, instead of following the typical cognitivist “we have sentences in our brain” line, I went in a different direction, because I felt that was more appropriate, working with philosophers and educators and gradually building my own perspective on the matter.

    Typically in a connectivist course a student will do some sort of common activities: reading, posting comments, blogging, contributing content to a wiki. They don’t have to do this, but empirically, what we observe is that a large number of them do do this. But also, what we have observed empirically through different offerings of connectivist-style courses, is that students will engage in a large and unpredictable set of activities. They may create a map of course participants, they may host a seminar in Second Life. In the Connectivism course we had three separate Spanish-speaking subgroups form in the course, including the ‘Connectivitas’, which I thought was kind of cool. And there was Second Life stuff. People created Google Groups, Yahoo Groups, translated stuff, created concept maps, and more.

    This is the idea, when you’re not located in any particular place, you’re not set in any particular environment, when all you have to do to participate in a course is to connect to us, you can use any application, any location, any forum, any way of communicating or working with ideas, and it connects back to the whole, and it adds a uniqueness, your unique perspective, to the whole.

    A connectivist course – very important – is not a bunch of people marching lockstep through the same activities. ‘Learning Design’ is anathema to connectivism. It’s not everybody doing the same thing. In a connectivist course, everybody does different things.

    And there even isn’t a sense of ‘everybody’. Sometimes people are in the course, sometimes they’re not. They may start the course and not finish, they may finish the course without having started it. It doesn’t matter.

    It’s understood, expected, that students will undertake different activities, of different kinds, that their learning will emerge as a product of these activities, and just as importantly, our social understanding of the subject matter will also grow. Again, it’s not a static set of course content. Our knowledge of the course content grows every time we offer the course.

    So, a connectivist course basically has two major modes:

    First of all, the creation of an environment. This is the personal learning environment, an environment that supports or fosters great diversity and autonomy in participation, an environment that is also open – very important, you cannot close a barrier to a connectivist course, it has to be open so people and ideas can flow in and out. And it’s based on the idea of interactions between people, and a large and undefinable body of materials.

    And then, secondly, in this environment, people do their own things, create interactions with each other, and new and unexpected – typically unexpected – knowledge flows outward as a result.

    And so that is the talk that I wanted to offer you today and I think we’ll have time for comments and questions, and I hope you and the people who were online and on UStream enjoyed that. But we’ll see, we’ll see if there’s anyone left. 161, yeah.

    Footnotes

    114 Stephen Downes. Connectivism and Transculturality. Stephen’s Weblog (presentations). Spanish translation available. Slides and audio available. May 7, 2010. http://www.downes.ca/presentation/251

    Buenos Aires, April 20, 2010

     

    Theoretical Synergies

    A reader asked:

    I read an article that was published in the March 2011 edition of IRRODL: “Proposing an Integrated Research Framework for Connectivism: Utilising Theoretical Synergies”, by B. Boitshwarelo (Botswana).115

    I found it very interesting. However, certain questions have arisen in regard to the analysis done by the author.

    1. According to Activity Theory (AT), learning is initiated by intention (p168): “learning as conscious processing, a transformational process that results from the reciprocal feedback between consciousness and activity”. Is this true in connectivism?

      Connectivism says that learning is the ability to construct and traverse networks. Sometimes, this process may not be intentional. I mean, sometimes we learn without being aware that this is happening (as a child, for example). Does connectivism contradict AT?

      Does connectivism contradict AT?

      No doubt different people have their own theories, but I have argued in the past that one of the major differences between connectivism and constructivist theories generally is that in connectivism learning is a property of the system, something that happens all the time, and is not therefore the subject of intentional activity. You don’t decide to learn now, and maybe to not learn later, you are learning all the time, it’s what the brain does, and the only choice you exert over the process is what you will do to affect the experiences leading to your learning. Watch TV all day and you’ll learn about game shows and daytime dramas, practice medicine and you’ll learn to be a doctor. Similarly, where constructivists say “you make meaning”, I disagree with the expression, because the production (so-called) of meaning is organic, and not intentional.

    2. (p 169) The author says that one feature of connectivism is that it recognizes the need to adapt to the ever-changing nature of information “in order to resolve the disharmony introduced by such change”. My point is: does connectivism talk about this? Does connectivism aim to resolve these contradictions, or is it about accepting and learning to live with them? Are connectivist systems stable?

      Does connectivism aim to resolve these contradictions?

      There are of course no contradictions in nature. A contradiction is a linguistic artifact, the result of sentences believed to be true each entailing that the other is false. Because so much of cognition is non-linguistic, it is probably not useful to speak of contradiction in this context, but rather to speak of harmony and disruption. (I say this almost off-the-cuff, but this would really be a significant change in our understanding of logic and reason).

      All connectionist systems – i.e., all networks, as understood computationally – work through a process of ‘settling’ into a harmonious state. What counts as harmonious varies depending on the precise theory being implemented. For example:

      • Hebbian associationist systems settle naturally into a state where neurons or entities with similar activation states become connected
      • Back-propagation systems adjust according to feedback
      • Boltzmann systems settle into a stable state as defined by thermodynamic principles

      The ‘disharmony caused by change’ is best thought of as a new input that disrupts this settling process. The network responds to this change by reconfiguring the connections between entities as a result of this input. This is learning.
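
    To make the idea of ‘settling’ concrete, here is a minimal sketch of a Hebbian update in Python. It is an illustration only, not something from the original exchange: the function name, the three-unit network and the learning rate are all invented for the example. Connections strengthen between units that are active together, so repeated exposure to a pattern lets the weights settle, and a novel input reconfigures them, which is the sense of ‘learning’ described above.

def hebbian_update(weights, activations, rate=0.1):
    """Strengthen the connection between every pair of co-active units."""
    n = len(activations)
    for i in range(n):
        for j in range(n):
            if i != j:
                weights[i][j] += rate * activations[i] * activations[j]
    return weights

# Three units, no connections to begin with.
weights = [[0.0] * 3 for _ in range(3)]

# Repeatedly presenting the same pattern lets the network settle around it.
familiar = [1.0, 1.0, 0.0]
for _ in range(10):
    hebbian_update(weights, familiar)

# A new input activates a different pair of units, disrupting that
# configuration and growing new connections.
novel = [0.0, 1.0, 1.0]
hebbian_update(weights, novel)

print(weights)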

      Whether we are able to address linguistic artifacts, such as contradiction, with a given learning experience, is open to question. There is no reason to expect a contradiction to be resolved, though were our linguistic artifacts based in experience, such a resolution would be a desirable, and expected, outcome.

    3. (pp.171-172) Is it really necessary to use the theoretical concepts of other more consensual and tested theories to study and validate connectivism? Does connectivism have its own tools of analysis to do this? Does connectivism need to be fed by constructs of other theories? Doesn’t this contradict connectivism as a new approach to learning in the digital age?

    Does connectivism need to be fed by constructs of other theories?

    I think it’s important to understand that connectivism is the adaptation of educational theory to these other theories, that it points to a theme underlying these other theories, and is not distinct from these theories.

    Connectivism is, in my mind, a particular instance of a much broader theory of networks. Thus, evidence that informs us about the theory of networks generally also informs us about connectivism.

    This is an important point. Constructivist approaches to education (and most other things) place a special significance on the role of theory, and particularly the role that theory plays in providing a perspective or ‘lens’ through which a phenomenon is experienced. Hence we expect any given theory to provide a given ‘stance’, provide analytical ‘tools’, and beyond certain constraints (such as non-contradiction) no one theory is assumed to constitute a privileged stance. Theory-construction thus becomes an important scientific and pedagogical activity, leading to a host of other constructs (such as, say, ‘identity’).

    Connectivist learning is very different. It is not about creating cognitive constructs such as theories. Learning, according to connectivism, is a process of growth and development of networks rather than a process of acquisition and creation of concepts. Networks are not concepts. Concepts are representational systems; they postulate a divide between what they are and what they represent, they therefore entail a theory of signs, or semiotics, and have linguistic properties (such as the law of non-contradiction). Networks are physical systems, not cognitive systems. Though they can be depicted as representing things (e.g., a brain state may be thought of as representing a physical state), this depiction is in itself an interpretation, and not a property of the network itself.

    Now I think that network theory in general and connectivism in particular can provide a set of tools to analyze *other* phenomena – I describe these as six elements of critical literacies, but the exact nature is unimportant here. It is rather akin to the way mathematics offers us tools for the evaluation of other phenomena – mathematics can define data and instrumentation, such as measurement, ratio and comparison, and bookkeeping – but it would not be reasonable to turn these phenomena around as a means of evaluating mathematics.

    Networks, in other words, are what they are. Network theory is nothing more or less than a description of networks, and the application of that description to other phenomena, just as qualitative theory is a description of properties (such as colour, size, shape, position, relation) and quantitative theory is a description of number and ratios.

    keith.hamon said…

    Stephen, thanks for the fine clarification of the role of networks in thinking about education, knowledge, and so forth. You are correct to point out that Connectivism is a subset of the broader discussion about networks; however, I fail to follow your thinking on a couple of points.

    First, you say that there are “no contradictions in nature. A contradiction is a linguistic artifact,” which might suggest to some a divide between nature and its constructs and human constructs. I’m uncomfortable with this distinction, though I know it is the dominant view. I see it useful to consider linguistic structures as a part of nature: a complex layer of the total network emerging from a physical substrate, and perhaps having different rules than that substrate, but still dependent on that substrate. It seems to me best to think of language in all its variations as a part of the natural mix, part of what the Universe has created. Isn’t it more useful to think that language and linguistic structures—even contradictions—have naturally emerged in nature, and are therefore natural?

    This leads to the second statement that I don’t quite follow: “Networks are physical systems, not cognitive systems.” But aren’t cognitive systems also usefully thought of in terms of networks? And aren’t cognitive systems based on physical substrates, physical networks? Isn’t it more useful to think of cognitive structures as just a different scale of the natural network? I agree here with Olaf Sporns that “cognition is a network phenomenon.”

    Again, I think I’m having trouble with what I sense in your statements as a separation between the physical and natural and the cognitive. Is this distinction necessary? Or have I missed the point of your argument? Thanks.

    Downes said…

    • I’m uncomfortable with this distinction, though I know it is the dominant view. I see it useful to consider linguistic structures as a part of nature

    That’s very much a minority view, and I think not one you can maintain consistently (of course, that may not be a problem for you).

    The thing is, the contradiction is not a part of the physicality of the expression. It arises only as a result of the interpretation we place on the symbol system, as a result of how we apply truth, meaning, and other abstract properties to the expression.

    In the physical world itself (at least, using words the way they are normally meant) it is not possible for something to be both P and not P. It can’t be both a dog and not a dog. Yes, you can alter your linguistic system to allow contradiction – that’s what your stance does – but you cannot successfully incorporate that stance into the physical world. That’s a fallacy I call “the linguistic pull” – the belief that physical systems are governed by non-physical laws.

    • It seems to me best to think of language in all its variations as a part of the natural mix, part of what the Universe has created.

    But you recognize that it would be absurd to say “there are pink dragons” just because someone imagined them to exist, believed them to exist, or uttered the statement “pink dragons exist,” right?

    The multiplicity of linguistic systems does not entail a multiplicity of physical systems.

    • But aren’t cognitive systems also usefully thought of in terms of networks?

    This is a variation of what Dennett would call ‘the intentional stance’. But it is also an example of what Paul Churchland would call ‘folk psychology’.

    Here’s the dilemma:

    If we use the word ‘dog’ in a relatively ordinary and unambiguous way, we can relatively easily create a mapping such that statements about ‘dogs’ are statements about a definable set of physical objects [dogs], such that what is true of ‘dogs’ is also true of [dogs].

    For the intentional stance, or folk psychology, to be successful, we need also to be able to do the same thing with common abstract concepts.

    The problem is made most clear with a concept like belief. We can all use the word ‘beliefs’ and have some sense of what it means. But there is no set of objects [beliefs] such that we can map from ‘beliefs’ to [beliefs].

    The same sort of problem exists with logico-linguistic terms such as ‘truth’ and ‘meaning’. Again, there is no mapping from ‘truth’ to [truth] (the best attempt is Tarski’s theory, which would be best represented here as “‘snow is white’ is true iff [snow is white] is ‘true’.”)

    We know that there are human brains, and that they do things like think and evaluate and intend. But the terminology we use to describe what human brains do is terribly imprecise. That’s fine, so long as we don’t do what you do here – so long as we don’t infer from properties of the symbol system to properties of the physical system. The ‘physical symbol system’ hypothesis, in other words, is false.

    So while cognition is indeed a network phenomenon, it is not governed by the principles and rules we have to this point characterized as cognitive phenomena.

    Glen said…

    It’s an interesting post, thanks. If connectivism is the adaptation of something Educational, shouldn’t there be more of a focus on maximizing learning and intentional learning, rather than just learning itself?

    This seems to me a very big gap in connectivism. We can’t simply connect, we have to connect in some way…be it through language or other representational systems. Although these systems are surely not the same thing as what they represent, they play an inseparable part in whatever learning will take place. Connectivism may be a description of potential learning, but the quality of actual learning needs attention too.

    Downes said…

    • We can’t simply connect, we have to connect in some way…be it through language or other representational systems.

    If I bonk you in the head with a thrown apple, we’ve connected – even though no language or representational system was used.

    This is what’s important about connectivism (and network approaches generally) – the connection itself, rather than any putative ‘content’ of that connection, is what’s important.

    Glen said…

    If you bonk me in the head with an apple, you’ve connected by throwing an apple at me. That’s much different than connecting by throwing a wrench, or connecting by making a phone call.

    When you say “the connection itself”, do you mean to say “connecting itself”? As I read most of it, you’re not concerned with the connection itself.

    I would say both are potentially equally important in connectivism (pipe and content), because it is applied to a field. Throwing a wrench at me, compared to a Nerf football is going to change whatever your intended message is in connecting with me, regardless of where that message originates.

    Downes said…

    • That’s much different than connecting by throwing a wrench

    Yes it is. But it does not follow that there is a representational difference, or that the difference constitutes a representation.

    • “the connection itself”, do you mean to say “connecting itself”?

    No. Connecting is the act of forming a connection. A connection is the result of the act of connecting. But there are minor semantic differences (‘connecting’ is a success verb) that I don’t want to mix in with what I’m saying here.

    • Throwing a wrench at me, compared to a Nerf football is going to change whatever your intended message is in connecting with me

    This assumes there is an intended ‘message’, i.e., some content in the connection. But it does not prove that there is content in the connection.

    keith.hamon said…

    Hmm … perhaps we are talking past each other. When I say that I see it useful to consider linguistic structures as a part of nature, I am saying that linguistic structures are built on, or emerge from, physical structures, and I fully recognize that the physical structures came first (I’m speaking in evolutionary time here). I also accept that physical structures and linguistic structures have different rules. I also accept that it’s often advantageous for the linguistic structures to map as precisely as possible to the physical structures, especially if we’re talking about physical structures.

    I am, then, viewing linguistic structures as emergent from various physical substrates: neuronal patterns, vocal sounds, organized marks on stones, clay tablets, papyrus sheets, and computer screens. The principle of emergence is still contested in science, but it is not uncommon. In his article Emergent Biological Principles and the Computational Properties of the Universe, Paul Davies defines it as “the appearance of new properties that arise when a system exceeds a certain level of size or complexity, properties that are absent from the constituents of the system.” My point is that linguistic structures, and consciousness in general, absolutely emerge from and depend upon physicality (or nature), and yet they also have properties that do not belong to the physical substrate from which they emerged. As you point out, linguistic constructs can contradict one another, whereas physical constructs cannot.

    This orientation means that I would not likely make a couple of the statements that you make. For instance, you say that “contradiction is not a part of the physicality of the expression.” I say that it is part and parcel of the physicality of the expression. I do not know how to form a contradiction without a physical expression. I am contradicting you now, but only as I’m typing these electro-mechanical symbols and as you are reading them. Of course, I contradicted you earlier (yesterday, in fact) as I was thinking through your comments, but even that contradiction absolutely depended upon my physical neuronal structures, among other things (I also think the coffee I had should be factored into this equation, but I won’t pursue that just now). Thus, while contradiction is a property of linguistic structures and not a property of its physical substrates, contradiction cannot exist without that physical substrate.

    I am, of course, sympathetic to the distinction you make between the physicality of an expression and the abstract meaning of that expression when you say that, for instance, contradiction “arises only as a result of the interpretation we place on the symbol system, as a result of how we apply truth, meaning, and other abstract properties to the expression.” However, I’ll contradict you again. I don’t think that meaning is some independent, abstract entity that “we place on the symbol system.” Rather, meaning is what emerges as we spark networks of physical neurons and build networks of physical symbols and then move those symbols through larger networks to connect with and affect others. Thus, abstract meaning can do things that physical words cannot. Thus, I do not intend to “infer from properties of the symbol system to properties of the physical system.” I do believe in what Davies calls strong emergence, in which “higher levels of complexity possess genuine causal powers that are absent from the constituent parts. That is, wholes may exhibit properties and principles that cannot be reduced, even in principle, to the cumulative effect of the properties and laws of the components.” Thus, language can both do things that can’t be done at the physical level and it can cause things to happen at the physical level that might not have happened otherwise, but it is still an emergent feature of the physical level and part of that level. I don’t see how abstract meaning can exist without the physical, natural world. Though I can certainly imagine such a thing.

    And this brings me to the last contradiction I’ll make. You say that “it would be absurd to say ‘there are pink dragons.'” Well, yes, but only in the narrow sense that you probably used the term absurd to indicate a cognitive construct that does not map with any rigor, regularity, or reliability to any physical construct. Yet, if I allow only language that passes your test of the absurd, then I eliminate much of both poetics and rhetoric, the twin pillars of my professional and personal interests. Fact is: pink dragons do exist in literature. Well, perhaps not pink ones, but certainly the common, everyday, brown or gray type of dragon—St. George’s and Tolkien’s. To my mind, it’s absurd to say that dragons do not exist in imaginative literature, and it’s equally absurd to say that imaginative literature does not exist in nature and is not absolutely dependent upon physicality.

    Moving imaginative literature—or imagination in general—into the realm of the natural allows me to apply network principles to my study of imaginative literature. So far, I have found that very useful and productive.

    Glen said…

    @Stephen Throwing the apple represents your intention to get my attention. If you’re doing it just for fun, then it represents your idea of humor. If, like you say, you do it for no reason then whatever it represents or not (geometric shape of the universe?) is beside the point here because it’s not intentional…which Education is. This is the particular instance.

    I think I get what you mean about the “connecting” definition. Although, I still find it confusing as it seems to include actual connections, not just potential ones.

    Downes said…

    • throwing the apple represents your intention to get my attention

    No. This is just speculation on your part. I might just be practicing my aim. And that’s my point. The attribution of ‘a reason’ or some ‘meaning’ to my action is something you are bringing to it, not something that was inherent in the act.

    Downes said…

    @Keith, I am happy to say that meaning is an emergent property of the communication or communicative act.

    I’ve thought a lot about emergence over the years.

    I think that one of the key things about emergence is that for us to say some phenomenon is emergent we have also to say that it is recognized as such.

    For example, the Jesus face 116 on Mars is an emergent property of light and rock outcrops. But it becomes a ‘Jesus face’ only if we already know about Jesus. Otherwise, it’s just random light and dark.

    In other words, an emergent property is not inherent in the system producing it, but depends entirely on the perceiver being able to recognize it. (And this leads directly to a definition of knowledge – to ‘know’ is to be able to ‘recognize’).

    Footnotes

    115 Bopelo Boitshwarelo. Proposing an Integrated Research Framework for Connectivism: Utilising Theoretical Synergies. The International Review of Research in Open and Distance Learning Volume 12, Number 3. March 2011. http://www.irrodl.org/index.php/irrodl/article/view/881/1816

    116 YouTube. A Jesus Face on Mars? Video. March 26, 2008. http://www.youtube.com/watch?v=_FDgvPJtKMo

    Moncton, July 9, 2011

     

     

    A Truly Distributed Creative System

    Posted to idc, October 11, 2007

     

    John Hopkins wrote,117 on idc:

    You cannot have a truly distributed creative system without there being open channels between (all) nodes.

    I don’t think this is true.

    Imagine an idealized communications system, where links were created directly from person to person. If all channels were open at any given time, we would be communicating simultaneously with 6 billion people. We do not have the capacity to process this communication, so it has the net effect of being nothing but noise and static. Call this the congestion problem.

    This point was first made to me by Francisco Varela in a talk at the University of Alberta Hospital in 1987 or so. He was describing the connectivity between elements of the immune system, and showed that the most effective communication between nodes was obtained at less than maximal connection, a mid-way point between zero connectivity and total connectivity. Similarly, in human perception, we find that neurons are connected, not to every other neuron, but to a subset of neurons.

    What this tells me is that what defines a “truly distributed creative system” is not the number of open channels (with ‘all’ being best) but rather the structure or configuration of those channels. And in this light, I contend that there are two major models to choose from:

    • egalitarian configurations – each node has the same number of connections to other nodes
    • inegalitarian configurations – nodes have unequal numbers of connections to other nodes

    Now the ‘scale free’ networks described by Clay Shirky are inegalitarian configurations. The evidence of this is the ‘power law’ diagram that graphs the number of connections per member against the number of members having this number of connections. Very few members have a high number of connections, while very many members have a low number of connections – this is the ‘long tail’ described by Anderson. The networks are scale free because, theoretically, there is no limit to the number of connections a member could have (a status Google appears to have achieved on the internet). [*] Other inegalitarian networks have practical limits imposed on them. The network of connections between airports, for example, is an inegalitarian configuration. Chicago is connected to many more places than Moncton. But the laws of physics impose a scale on this network. Chicago cannot handle a million times more connections than Moncton, because airplanes take up a certain amount of space, and no airport could handle a million aircraft. This is another example of the congestion problem.
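
    As a rough illustration of how such a distribution arises (a sketch with invented parameters, not drawn from Shirky or Anderson), a network grown by preferential attachment, where each newcomer is more likely to connect to already well-connected members, ends up with a few heavily connected nodes and a long tail of sparsely connected ones:

import random
from collections import Counter

def preferential_attachment(n, m=2, seed=42):
    """Grow a network where each new node attaches to m existing nodes,
    chosen with probability proportional to their current connections."""
    random.seed(seed)
    targets = [0, 1]               # each node appears once per connection it has
    edges = [(0, 1)]
    for new in range(2, n):
        chosen = set()
        while len(chosen) < min(m, new):
            chosen.add(random.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])
    return edges

degree = Counter()
for a, b in preferential_attachment(1000):
    degree[a] += 1
    degree[b] += 1

# Very many members have few connections; very few have many.
counts = Counter(degree.values())
for k in sorted(counts)[:5]:
    print(counts[k], "members have", k, "connections")
print("most-connected member has", max(degree.values()), "connections")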

    What distinguishes the inegalitarian system from the egalitarian system is the number of ‘hops’ through connections required to travel from any given member to another (this can be expressed as an average of all possible hops in the network). In a fully inegalitarian system, the maximum number of hops is ‘2’ – from one member, who has one connection, to the central node, which is connected to every other node, to the target node. In a fully egalitarian system, the maximum number of hops can be much higher (this, again, is sensitive to configuration).
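
    The difference in hops can be checked with a small sketch (the twelve-node star and ring below are invented examples, not configurations from the text). Breadth-first search gives the hop count between members; the maximum comes out at 2 for the fully inegalitarian star and at roughly half the network size for a simple egalitarian ring:

from collections import deque

def hops_from(start, neighbours):
    """Hop count from `start` to every reachable node (breadth-first search)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in neighbours[node]:
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return dist

n = 12

# Star: node 0 is the hub, tied to everyone; everyone else is tied only to the hub.
star = {0: list(range(1, n))}
star.update({i: [0] for i in range(1, n)})

# Ring: each node is tied to its two immediate neighbours.
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

for name, net in (("star", star), ("ring", ring)):
    worst = max(max(hops_from(i, net).values()) for i in net)
    print(name, "maximum hops:", worst)   # star: 2, ring: n // 2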

    As the discussion above should make clear, fully inegalitarian systems suffer as much from congestion as fully connected systems; however, this congestion is suffered in only one node, the central node. No human, for example, could be the central node of communication for 6 billion people. This means that, while the number of hops to get from one point to another may be low, the probability of the message actually being communicated is also low. In effect, what happens is that the inegalitarian system becomes a ‘broadcast’ system – very few messages are actually sent, and they are received by everyone in one hop.

    In other words – maximal connectivity can result in the opposite of a truly distributed creative system. It can result in a maximally centralized system.

    I’m sure there’s a reference from critical theory or media theory, but what to me would define a truly distributed creative system is ‘voice’ (sometimes called ‘reach’). This could be understood in different ways: the number of people a person communicates with, the average number of people each person communicates with, the minimum number, etc. My own approach to ‘voice’ is to define it in terms of ‘capacity’. In short, any message by any person could be received by all other people in the network. But it is also defined by control. In short, no message by any person is necessarily received by all other people in the network.

    One way to talk about this is to talk about the entities in the network. When you look at Watts and Barabási, they talk about the probability that a message will be forwarded from one node to the next. This, obviously, is a property of both the message and the node. Suppose, for example, that the message is the ebola virus, and that the node is a human being. The virus is very contagious. If contracted by one person, it has a very high probability of being passed on to the next. But suppose the person is resistant. Then he or she won’t contract the virus, and thus, has a very low probability of passing it on.

    The other way to talk about this is to talk about the structure of the network. The probability of the virus being passed on increases with the number of connections. This means that in some circumstances – for example, a person with many friends – the probability of the virus being passed on is virtually certain. So in some network configurations, there is no way to stop a virus from sweeping through the membership. These networks are, specifically, networks that are highly inegalitarian – broadcast networks. Because the virus spreads so rapidly, there is no way to limit the spread of the message, either by quarantine (reducing the number of connections per carrier) or inoculation (increasing the resistance to the message).
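
    The point can be sketched with a toy propagation model (the network size, the three ties per node and the resistance values below are invented for illustration): whether a message spreads depends jointly on how contagious the message is and how resistant each receiving node is, and a highly contagious message sweeps through even a sparsely connected network:

import random

def spread(neighbours, contagiousness, resistance, seed_node=0):
    """Return the set of nodes that end up carrying the message."""
    carriers = {seed_node}
    frontier = [seed_node]
    while frontier:
        node = frontier.pop()
        for nxt in neighbours[node]:
            if nxt in carriers:
                continue
            # The message passes only if it beats the receiving node's resistance.
            if random.random() < contagiousness * (1 - resistance[nxt]):
                carriers.add(nxt)
                frontier.append(nxt)
    return carriers

random.seed(1)
n = 30
# A sparsely connected network: each node tied to a handful of random others.
neighbours = {i: random.sample([j for j in range(n) if j != i], 3) for i in range(n)}
resistance = {i: random.random() for i in range(n)}   # 0 = susceptible, 1 = immune

for c in (0.2, 0.9):
    reached = spread(neighbours, contagiousness=c, resistance=resistance)
    print("contagiousness", c, "reached", len(reached), "of", n, "nodes")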

    In order to create the truly distributed creative system, therefore, you need to:

    • limit the number of connections for any given node. This limit would be based on what might be thought of as the ‘receptor capacity’ of any given node, that is, the maximum number of messages it can receive without congestion, which in turn is the maximum number of messages it can receive where each message has a non-zero chance of changing the state of the receptor node.
    • maximize the number of connections, up to the limit, for any given node. This might be thought of as maximizing the voice of individual nodes. What this does is to give any message from any given node a good start – it has a high probability of propagating at least one step beyond its originator. It cannot progress too fast – because of the limit to the number of connections – but within that limit, it progresses as fast as it can.
    • within these constraints, maximize the efficiency of the network – that is (assuming no congestion) to minimize the average number of hops required for a network to propagate to any other point in the network.

    These conditions combine to give a message the best chance possible of permeating the entire network, and the network the best chance possible of blocking undesirable messages. For any given message, the greatest number of people possible are in a position to offer a countervailing message, and the network is permeable enough to allow the countervailing message the same chance of being propagated.

    What sort of network does that look like? I have already argued that it is not a broadcast network. Let me take that one step further and argue that it is not a ‘hub and spokes’ network. Such networks are biased toward limiting the number of hops – at the expense of voice, and with the risk of congestion. That’s why, in hub and spoke networks, the central nodes become ‘supernodes’, capable of handling many more connections than individual nodes. But this increase in capacity comes with a trade-off – an increase in congestion. This becomes most evident when the supernode attempts to acquire a voice. A centralized node that does nothing but reroute messages may handle many messages efficiently, but when the same node is used to read those messages and (say) filter them for content, congestion quickly occurs, with a dramatic decrease in the node’s capacity.

    Rather, the sort of network that results is what may be called a ‘community of communities’ model. Nodes are highly connected in clusters. A cluster is defined simply as a set of nodes with multiple mutual connections. Nodes also connect – on a less frequent basis – to nodes outside the cluster. Indeed (to take this a step further) nodes typically belong to multiple clusters. They may be more or less connected to some clusters. The propagation of a message is essentially the propagation of the message from one community to the next. The number of steps is low – but for a message to pass from one step to the next, it needs to be ‘approved’ by a large number of nodes.
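
    A small sketch of what such a ‘community of communities’ might look like (the cluster sizes and bridge ties below are arbitrary choices, not taken from the text): nodes are densely tied within their own cluster, each cluster has a single tie outward, no node’s number of connections exceeds a modest cap, and yet any node can still reach any other in a handful of hops:

from collections import deque
from itertools import combinations

def community_of_communities(clusters=6, size=5):
    """Dense ties inside each cluster, one bridge tie to the next cluster."""
    ties = {c * size + i: set() for c in range(clusters) for i in range(size)}
    for c in range(clusters):
        members = [c * size + i for i in range(size)]
        for a, b in combinations(members, 2):   # everyone tied within the cluster
            ties[a].add(b)
            ties[b].add(a)
        bridge_from, bridge_to = members[0], ((c + 1) % clusters) * size
        ties[bridge_from].add(bridge_to)
        ties[bridge_to].add(bridge_from)
    return ties

def max_hops(ties):
    """Longest shortest path in the network (breadth-first search from each node)."""
    worst = 0
    for start in ties:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nxt in ties[node]:
                if nxt not in dist:
                    dist[nxt] = dist[node] + 1
                    queue.append(nxt)
        worst = max(worst, max(dist.values()))
    return worst

ties = community_of_communities()
print("largest number of ties on any node:", max(len(v) for v in ties.values()))
print("maximum hops between any two nodes:", max_hops(ties))

    Unlike the star network in the earlier sketch, the cap on ties is respected everywhere, while the number of hops between any two members stays small.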

    When we look at things like Wenger’s communities of practice, we see, in part, the description of this sort of network. Rather than the school-and-teacher model of professional development (which is a hub and spokes model), the community of practice maximizes the voice of each of its members. It can be called a cluster around a certain topic or area of interest, but the topic or area of interest does not define the community; it is rather an empirical description of the community (and thus, for example, we see people who came together as a hockey team in 1980 continue to be drinking buddies in 1990 and go on to form an investment club in 2000).

    Maximally distributed creativity isn’t about opening the channels of communication, at least, not directly. It is about each person having the potential to be a member of a receptive community, where there is a great deal of interactivity among the members of that community, and where the community, in turn, is a member of a wider community of communities. Each person thus is always heard by some, has the potential to be heard by all, and plays a role not only in the creation of new ideas, but also, as part of the community, in the evaluation and passing on of others’ ideas.

    ==

    [*] I just want to amend my previous post slightly.

    I wrote: “The networks are scale free because, theoretically, there is no limit to the number of connections a member could have…”

    This should not be confused with the definition of a ‘scale free network’, which is specifically, that “a network that is scale-free will have the same properties no matter what the number of its nodes is.”

    But the relationship between my statement and the more formal definition should be clear. If there is a limit to the number of connections created by the physical properties of the nodes, then the mathematical formula that describes one instance of the network (a small instance) cannot be used to describe all instances of the same type of network.

    Footnotes

    117 John Hopkins. Re: Notworking online collaboration in science and education. The mailing list of the Institute for Distributed Creativity. October 11, 2008. http://permalink.gmane.org/gmane.culture.media.idc/395

    Moncton, October 11, 2007

     

    The Mind = Computer Myth

    Responding to Norm Friesen:118

    If you were to read all of my work (not that I would wish that on anyone) you would find a sustained attack on two major concepts:

    1. The ‘information-theoretic’ or ‘communications theoretic’ theory of learning, and
2. The cognitivist ‘information processing’ physical symbol system model of the mind.

These are precisely the two ‘myths’ that you are attacking, so I am sympathetic.

    That said, I think you have the cause-and-effect a bit backwards. You are depicting these as technological theories. And they are, indeed, models of technology.

    However, as models, these both precede the technology.

Both of these concepts are aspects of the same general philosophy of mind and epistemology. The idea that the human mind receives content from the external world and processes this content in a linguistic, rule-based way is at least as old as Descartes, though I would say that it has more of a recent history in the logical positivist theory of mind. Certainly, people like Russell, Carnap and even Quine would be very comfortable with the assumptions inherent in this approach.

    Arguably – and I would argue – the design of computers followed from this theory. Computers began as binary processors – ways of manipulating ones and zeros. Little wonder that macro structures of these – logical statements – emulated the dominant theory of reasoning at the time. Computers were thought to emulate the black box of the human mind, because what else would that black box contain?

    Now that said, it seems to me that there can’t really be any denying that there is at least some transmission and reception happening. We know that human sensations result from external stimuli – sight from photons, hearing from waves of compression, and so on. We know that, once the sensation occurs, there is a propagation of signals from one neural layer to the next. Some of these propagations have been mapped out in detail.

It is reasonable to say that these signals contain information. Not information in the propositional sense. But information in the sense that the sensations are ‘something’ rather than ‘something else’. Blue, say, rather than red. High pitched, say, rather than low pitched. And it was a philosophical theory long before the advent of photography (it dates to people like Locke and Hume, minimally) that the impressions these perceptions create in the mind are reflections of the sensations that caused them – pictures, if you will, of the perception.

    To say that ‘the mind is like a photograph’ is again an anticipation of the technology, rather than a reaction to it. We have the idea of creating photographs because it seems to us that we have similar sorts of entities in our mind. A picture of the experience we had.

    In a similar manner, we will see future technologies increasingly modeled on newer theories of mind. The ‘neural nets’ of connectionist systems are exactly that. The presumption on the part of people like Minsky and Papert is that a computer network will in some sense be able to emulate some human cognition – and in particular things like pattern recognition. Even Quine was headed in that direction, realizing that, minimally, we embody a ‘web of belief’.

    For my own part, I was writing about networks and similarity and pattern recognition long before the internet was anything more than a gleam in my eye. The theory of technology that I have follows from my epistemology and philosophy of mind. This is why I got into trouble in my PhD years – because I was rejecting the cognitivism of Fodor, Dretske and Pylyshyn, and concordantly, rejecting the physical symbol system hypothesis advanced by people like Newell and Simon.

    I am happy, therefore, to regard ‘communication’ as something other than ‘transmission of information’ – because, although a transmission of information does occur (we hear noises, we see marks on paper) the information transmitted does not map semantically into the propositions encoded in those transmissions. The information we receive when somebody talks to us is not the same as that contained in the sentence they said (otherwise, we could never misinterpret what it was that they said).

That’s why I also reject interpretations, such as the idea of ‘thought as dialogue’ or communication as ‘speech acts’ or even something essentially understood as ‘social interaction’. When we communicate, I would venture to say, we are reacting, we are behaving, we may even think we are ‘meaning’ something – but this does not correspond to any (externally defined) propositional understanding of the action.

    Footnotes

118 Norm Friesen. Myth 5: The Mind = Computer Myth. eHabitus (weblog). March, 2007. http://www.downes.ca/post/39473 Website no longer extant. http://ehabitus.blogspot.com/2007/03/myth-5-mind-computer-myth_424.html

    Moncton, March 14, 2007

     

    What’s The Number for Tech Support?

     

    The article in Science Daily119 is pretty misleading when it says “Human Brain Region Functions Like Digital Computer”. To most people, to function like a computer is to function based on a series of instructions encoded into some (binary) language. This in turn leads to the idea that the brain is like a computer program.

This, of course, is precisely not what O’Reilly is saying in the article (which is unfortunately not available online, though you can find numerous other articles of his.120 Probably the recent online article corresponding most closely to the Science article is Modeling Integration and Dissociation in Brain and Cognitive Development121).

    In this article he is pretty specific about how the brain represents conceptual structures. “Instead of viewing brain areas as being specialized for specific representational content (e.g., color, shape, location, etc.), areas are specialized for specific computational functions by virtue of having different neural parameters… This ‘functionalist’ perspective has been instantiated in a number of neural network models of different brain areas, including posterior (perceptual) neocortex, hippocampus, and the prefrontal cortex/basal ganglia system… many aspects of these areas work in the same way (and on the same representational content), and in many respects the system can be considered to function as one big undifferentiated whole. For example, any given memory is encoded in synapses distributed throughout the entire system, and all areas participate in some way in representing most memories.”

    This is tricky, but can be broken down. Basically, what he is proposing is a functionalist architecture over distributed representation.

“Functionalism122 in the philosophy of mind is the doctrine that what makes something a mental state of a particular type does not depend on its internal constitution, but rather on the way it functions, or the role it plays, in the system of which it is a part.”

    For example, when I say, “What makes something a learning object is how we use the learning object,” I am asserting a functionalist approach to the definition of learning objects (people are so habituated to essentialist definitions that my definition does not even appear on lists of definitions of learning objects).

    It’s like asking, what makes a person a ‘bus driver’? Is it the colour of his blood? The nature of his muscles? A particular mental state? No – according to functionalism, what makes him a ‘bus driver’ is the fact that he drives buses. He performs that function.

    “A distributed representation123 is one in which meaning is not captured by a single symbolic unit, but rather arises from the interaction of a set of units, normally in a network of some sort.”

As noted in the same article, “The concept of distributed representation is a product of joint developments in the neurosciences and in connectionist work on recognition tasks (Churchland and Sejnowski 1992124).”

    To illustrate this concept, I have been asking people to think of the concept ‘Paris’. If ‘Paris’ were represented by a simple symbol set, we would all mean the same thing when we say ‘Paris’. But in fact, we each mean a collection of different things, and none of our collections is the same.

    Therefore, in our own minds, the concept ‘Paris’ is a loose association of a whole bunch of different things, and hence the concept ‘Paris’ exists in no particular place in our minds, but rather, is scattered throughout our minds.
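As a minimal sketch of this, suppose we caricature each person’s ‘Paris’ as a pattern of activation spread over many small units rather than as a single symbol. Every unit name and activation level below is invented purely for illustration.

# A caricature of a distributed representation: 'Paris' is not one symbol but
# a pattern of activation spread over many small units, and the pattern
# differs from one head to the next.
my_paris = {"eiffel-tower": 0.9, "capital-city": 0.8, "plaster-of-paris": 0.6,
            "croissants": 0.4, "long-flight": 0.7}
your_paris = {"eiffel-tower": 0.8, "capital-city": 0.9, "paris-hilton": 0.7,
              "honeymoon": 0.5, "the-louvre": 0.6}

def overlap(a, b):
    # Cosine similarity between two activation patterns: 1.0 would mean the
    # same units activated to the same degree, 0.0 means no units in common.
    shared = set(a) & set(b)
    dot = sum(a[u] * b[u] for u in shared)
    norm = lambda v: sum(x * x for x in v.values()) ** 0.5
    return dot / (norm(a) * norm(b))

# The two patterns overlap enough for us to talk to each other about 'Paris',
# but they are not the same representation, and neither lives in any one unit.
print(f"overlap between my 'Paris' and yours: {overlap(my_paris, your_paris):.2f}")

The point of the sketch is only this: the concept is the whole pattern, no single unit carries it, and two people’s patterns can overlap enough to communicate without ever being identical.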

    Now what the article is saying is that human brains are like computers – but not like the computers as described above, with symbols and programs and all that, but like computers when they are connected together in a network.

“The brain as a whole operates more like a social network than a digital computer… the computer-like features of the prefrontal cortex broaden the social networks, helping the brain become more flexible in processing novel and symbolic information.” Understanding ‘where the car is parked’ is like understanding how one kind of function applies to the brain’s distributed representation, while understanding ‘the best place to park the car’ is like understanding how a different function applies to the same distributed representation.

    The analogy with the network of computers is a good one (and people who develop social network software are sometimes operating with these concepts of neural mechanisms specifically in mind). The actual social network itself – a set of distributed and interlinked entities, usually people, as represented by websites or pages – constitutes a type of distributed representation. A ‘meme’ – like, say, the Friday Five125– is distributed across that network; it exists in no particular place.

    Specific mental operations, therefore, are like thinking of functions applied to this social network. For example, if I were to want to find ‘the most popular bloggers’ I would need to apply a set of functions to that network. I would need to represent each entity as a ‘linking’ entity. I would need to cluster types of links (to eliminate self-referential links and spam). I would then need to apply my function (now my own view here, and possibly O’Reilly’s, though I don’t read it specifically in his article, is that to apply a function is to create additional neural layers that act as specialized filters – this would contrast with, say, Technorati, which polls each individual entity and then applies an algorithm to it).
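As a rough illustration of ‘applying a function to the network’ in the sense just described – representing each entity as a linking entity, screening out self-referential and spam links, and then counting what remains – here is a minimal sketch. The blog names, the spam heuristic and the thresholds are all made up; it is not how Technorati or any real service actually works.

# Represent each entity by who it links to, screen out self-referential links
# and (by a crude, invented heuristic) link spam, then count what remains.
outbound_links = {
    "alice.example":    ["bob.example", "carol.example", "alice.example"],
    "bob.example":      ["carol.example", "alice.example"],
    "carol.example":    ["alice.example"],
    "spamfarm.example": ["alice.example"] * 40,     # one source, many repeats
}

def popularity(network, spam_threshold=10):
    # Count distinct inbound sources per site, ignoring self-links and any
    # source that repeats the same link more than spam_threshold times.
    inbound = {}
    for source, links in network.items():
        for target in set(links):                   # collapse duplicates
            if target == source:
                continue                            # self-referential link
            if links.count(target) > spam_threshold:
                continue                            # crude spam filter
            inbound[target] = inbound.get(target, 0) + 1
    return sorted(inbound.items(), key=lambda kv: -kv[1])

# alice and carol each end up with two distinct inbound sources; the spam
# farm's forty repeated links count for nothing.
print(popularity(outbound_links))

The ‘most popular bloggers’ answer is not stored anywhere in the network itself; it only appears when a function like this is applied to the distributed pattern of links.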

    The last bit refers to how research is conducted in such environments. “Modeling the brain is not like a lot of science where you can go from one step to the next in a chain of reasoning, because you need to take into account so many levels of analysis… O’Reilly likens the process to weather modeling.”

This is a very important point, because it shows that traditional research methodology, as employed widely in the field of e-learning, will not be successful. This becomes even more relevant with the recent emphasis on ‘evidence-based’ methodology, such as the Campbell Collaboration.126 This methodology, like much of the same type, recommends double-blind tests measuring the impact of individual variables in controlled environments. The PISA samples127 are an example of this process in action.

    The problem with this methodology is that if the brain (and hence learning) operates as described by O’Reilly (and there is ample evidence that it does) then concepts such as ‘learning’ are best understood as functions applied to a distributed representation, and hence, will operate in environments of numerous mutually dependent variables (the value of one variable impacts the value of a second, which impacts the value of a third, which in turn impacts the value of the first, and so on).

As I argue in papers like Public Policy, Research and Online Learning128 and Understanding PISA129 the traditional methodology fails in such environments. Holding one variable constant, for example, impacts the variable you are trying to measure. This is because you are not merely screening the impact of the second variable; you are also screening the impact of the first variable on itself (as transferred through the second variable). This means you are incorrectly measuring the first variable.

    Environments with numerous mutually dependent variables are known collectively as chaotic systems.130 Virtually all networks are chaotic systems. Classic examples of chaotic systems are the weather system and the ecology. In both cases, it is not possible to determine the long-term impact of a single variable. In both cases, trivial differences in initial conditions can result in significant long-term differences (the butterfly effect131).
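A minimal sketch of the sensitivity being described, using the logistic map – a textbook chaotic system, not anything specific to learning or to the sources cited here – in which two runs that differ by one part in a million in their starting value soon bear no resemblance to one another.

# The logistic map: the next value depends only on the last, yet long-run
# behaviour is exquisitely sensitive to the starting point.
def logistic_trajectory(x0, r=3.9, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)   # a 'trivial difference in initial conditions'

for step in (0, 10, 20, 30, 40):
    print(f"step {step:2d}: {a[step]:.4f} vs {b[step]:.4f}   "
          f"(difference {abs(a[step] - b[step]):.4f})")

By step thirty or so the two trajectories are effectively unrelated, which is the sense in which long-term prediction of a single variable’s impact is not available in such systems.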

    This is a significant difference between computation and neural networks. In computation (and computational methodology, including traditional causal science) we look for specific and predictable results. Make intervention X and get result Y. Neural network and social network theory do not offer this. Make intervention X today and get result Y. Make intervention X tomorrow (even on the same subject) and get result Z.

    This does not mean that a ‘science’ of learning is impossible. Rather, it means that the science will be more like meteorology than like (classical) physics. It will be a science based on modelling and simulation, pattern recognition and interpretation, projection and uncertainty. One would think at first blush that this is nothing like computer science. But as the article takes pains to explain, it is like computer science – so long as we are studying networks of computers, like social networks.

    Thanks Guy Levert for the email question that prompted the title and the remainder of this post.

    Footnotes

    119 University of Colorado at Boulder (2006, October 5). Human Brain Region Functions Like Digital Computer. ScienceDaily. Retrieved April 1, 2012, from http://www.sciencedaily.com/releases/2006/10/061005222628.htm

    120 Randall C. O’Reilly. Online Publications. Website. Accessed October 7, 2006. http://psych.colorado.edu/~oreilly/pubs-online.html

    121 Randall C. O’Reilly. Modeling Integration and Dissociation in Brain and Cognitive Development. In Y. Munakata & M.H. Johnson (Eds) Processes of Change in Brain and Cognitive Development: Attention and Performance XXI. Oxford University Press. 2007. http://psych.colorado.edu/~oreilly/pubs-online.html

    122 Stanford Encyclopedia of Philosophy. Functionalism. First published Tue Aug 24, 2004; substantive revision Mon Apr 6, 2009. http://plato.stanford.edu/entries/functionalism/

    123 Distributed Representation. Washington University in St. Louis. Original Citation no longer extant. http://artsci.wustl.edu/~philos/MindDict/distributedrepresentation.html See instead the University of Waterloo, http://philosophy.uwaterloo.ca/MindDict/distributedrepresentation.html

    124 Patricia Smith Churchland and Terrence Joseph Sejnowski. The Computational Brain. MIT Press. 1992. http://books.google.ca/books/about/The_Computational_Brain.html?id=wVll6u0tzXoC&redir_esc=y

    125 Ariestess. The Friday Five. Website. Accessed October 7, 2006. http://thefridayfive.livejournal.com/

    126 The Campbell Collaboration. Website. http://www.campbellcollaboration.org/

127 OECD. Programme for International Student Assessment (PISA). Website. http://www.pisa.oecd.org/pages/0,2987,en_32252351_32235731_1_1_1_1_1,00.html

    128 Stephen Downes. Public Policy, Research and Online Learning. Stephen’s Web (weblog). May 6, 2003. http://www.downes.ca/post/60

    129 Stephen Downes. Understanding PISA. Stephen’s Web (weblog). November 30, 2004. http://www.downes.ca/post/17

    130 Larry Gladney. Chaotic Systems. The Interactive Textbook for Mathematics, Physics, and Chemistry. Accessed October 7, 2006, but no longer extant. http://www.physics.upenn.edu/courses/gladney/mathphys/subsection3_2_5.html

    131 Wikipedia. Butterfly Effect. Accessed October 7, 2006. http://en.wikipedia.org/wiki/Butterfly_effect

    Moncton, October 07, 2006

     

     

    Informal Learning: All or Nothing

    Responding to Jay Cross, All or Nothing.132

    It makes me think of the strategy employed by the Republican right.

    Most people prefer to be somewhere in the middle on a sliding scale, and political opinion is no different.

So what the Republicans did, through the use of extreme viewpoints like Rush Limbaugh, Ann Coulter and Pat Robertson, was to shift the whole scale far off to the right!

So their former position – a hard-right conservatism – now occupies the centre, and becomes the default choice. That’s how we see ‘balance’ attained on talk shows: by having two shades of right wing represented.

You (Jay Cross) are doing pretty much the same thing here. Take, for example, the scale between ‘hours’, ‘15 minutes’ and ‘3 minutes’. Well, the centre and the right are both informal learning selections. Why not a scale that represents the choices I had as an instructor: ‘3 hours’, ‘1.5 hours’, ‘50 minutes’?

    What’s interesting is that the other thing you’re doing (and George Siemens does this too, and I just haven’t found the words to express it) is that you are co-opting the other point of view as part of your point of view.

    It’s kind of like saying, “I support informal learning, except when I don’t.” George does the same thing when he describes Connectivism: “I don’t care whether you call it social constructionism.” I am not sure how to react – are you saying there is no fundamental difference between your position and the other position?

What is happening here is that an attempt is being made to make what is actually a fairly radical position seem moderate by saying something like, “Oh no, it’s the same thing you were doing, it’s just tweaking a few variables.”

It’s fostering the ‘science as cumulative development’ perspective where, most properly, it should be a ‘science as paradigm shift’ perspective. I don’t think it’s an accurate representation of the change that should be happening.

    Company A wants employee B to take training course Z. Who makes the decision, the company or the employee? This is a binary switch – you can’t say “they both make the decision” – that’s corporate newspeak for saying “the company does”.

    The sliding scale disguises this by using the general term ‘control’. But the point here is: either the employee is being told what to learn (some of the time, all of the time, whatever) or he or she is not. No sliding scale.

    A lot of the scales are like that. They are very reassuring for managers (to whom you have to sell this stuff, because the employees have no power or control). You are telling the managers, “You don’t have to relinquish control, it’s OK, it will still be informal learning.” But it won’t be. It will just be formal learning, but in smaller increments.

In addition, the scales lock in the wrong value set. It’s like presenting students with the option: “What kind of classroom would you like: open-concept, tables and chairs, rows of desks?” It looks like a scale, but the student never gets the choice of abandoning the classroom entirely.

    The ‘time to develop’ and the ‘author’ scales, for example, both imply some sort of ‘learning content’. What sort? As determined by the ‘content’ scale. Something that is produced, and then consumed. It is manifestly not, for example, a conversation. It creates an entity, the ‘resource’, and highlights the importance of the resource.

    The people who produce stuff will be relieved. Learning can still be about the production and consumption of learning content. They can still build full-length courses and call it ‘informal learning’.

    Everybody’s happy. Everybody can now be a part of the ‘informal learning’ bandwagon.

    What the slider scales analogy does is to completely mask the value of choosing one option or another. If you pick ‘more bass’ or ‘more treble’ there really isn’t a right or wrong answer; it’s just a matter of taste.

    But if there is something to informal learning, then there should be a sense in which you can say it’s better than the alternative. Otherwise, why tout it?

    You might say, well it is better, but there’s still those 20 percent of cases where we want formal learning.

    Supposing that this is the case, then what we want is a delineation of the conditions under which formal learning is better and those under which informal learning is better. The slider scale allows an interpretation under which everything can be set to ‘formal learning’ and it’s still OK.

    To make my point, consider the criteria I consider to be definitive of successful network learning, specifically, that networks should be:

    • decentralized
    • distributed
    • disintermediated
    • disaggregated
    • dis-integrated
    • democratic
    • dynamic
    • desegregated

    Now again, any of these parameters can be reduced to a sliding scale. ‘Democratic’ can even be reduced to four sliding scales:

    • autonomy
    • diversity
    • openness
    • connectedness

    But the underpinnings of the theory select these criteria, rather than merely random criteria, because these specify what it is better to be.

    ‘Autonomy’ isn’t simply a sliding scale. Rather, networks that promote more autonomy are better, because they are more reliable. If you opt for less autonomy, you are making the network less reliable. You aren’t simply exercising a preference; you are breaking the network.

    Now there will be cases – let’s be blunt about it – where it will be preferable to have a broken network.

    Those are cases where learning is not the priority. Where things like power and control are the priority. A person may opt to reduce autonomy because he doesn’t care whether it produces reliable results.

    There may be other cases where the choice of a less effective network is forced upon us by constraints. If it cost $100 million to develop a fully decentralized network, and $100,000 to develop a centralized network, many managers will opt for the less reliable network at a cheaper price.

But the point here is that there is no pretence that the non-autonomous centralized systems constitute some version of network learning simply because they are, say, dynamic. For one thing, the claim is implausible – the criteria for a successful network are not independent variables, but rather impact each other. And for another thing, the reduction of any of the conditions weakens the system so much that it can no longer be called network learning.

It’s kind of like democracy. Let’s, for the sake of argument, define ‘democracy’ as the set of rights in the charter of rights:

    • freedom of speech
    • freedom of the press
    • freedom of conscience
    • freedom of assembly, etc.

    Take away one of them – freedom of speech, say. Do you still have democracy? What good is freedom of the press, or freedom of assembly, without freedom of speech?

    Bottom line: If there is anything to the theory of informal learning, then the values it expresses are more than just preferences on a sliding scale.

    Representing them that way serves a marketing objective, in that it makes people who are opposed to the theory more comfortable, because it suggests they won’t really have to change anything.

    But it is either inaccurate or dishonest, because it masks the value of selecting one thing over another, and because it suggests that you can jettison part of the theory without impacting the whole.

    And in the case of the particular scales represented here, the selection locks people into a representation of the theory that is not actually characteristic of the theory. Specifically, it suggests that informal learning is just like formal learning in that it is all about the production and consumption of content.

And I think this whole discussion points to the dilemma that any proponent of a new theory faces: whether to stay true to the theory as conceived, or whether to water down the theory in order to make it more palatable to consumers and clients (some of whom may have a vested interest in seeing the theory watered down).

And it seems to me that the degree to which you accept the watering down of the theory is the degree to which you do not have faith in it.

Is informal learning really about duration, content, timing and the rest? Probably not. But if not, then what is it about? What are the values expressed by the theory?

    Footnotes

    132 Jay Cross. All or Nothing. Informal Learning Blog. February 9, 2007. http://informl.com/?p=702

    Moncton, February 10, 2007

     

    Non-Web Connectivism

     

     

    Responding to this thread…133

(One of the things I really dislike about Moodle is that I have to use the website to reply to a post – I get it in my email, and I’d rather just reply in my email.) Anyhow…

    It occurs to me on reading this that the assembly line can and should be considered a primitive form of connectivism. It embodies the knowledge required to build a complex piece of machinery, like a car. No individual member of the assembly line knows everything about the product. And it is based on a mechanism of communication, partially symbolic (through instructions and messages) and partially mechanical (as the cars move through the line).

    The assembly line, of course, does not have some very important properties of connectivist networks, which means that it cannot adapt and learn. In particular, its constituent members are not autonomous. So members cannot choose to improve their component parts. And also, assembly line members must therefore rely on direction, increasing the risk they will be given bad instructions (hence: the repeated failures of Chrysler). Also, they are not open (though Japanese processes did increase the openness of suppliers a bit).

    It is important to keep in mind, in general, that not just any network, and not just any distributed knowledge, qualifies as connectivist knowledge. The radio station example in particular troubles me. It is far too centralized and controlled. In a similar manner, your hard drive doesn’t create an instance of connective knowledge. Yes, you store some information there. But your hard drive is not autonomous, it cannot opt to connect with other sources of knowledge, it cannot work without direction. It doesn’t add value – and this is key in connectivist networks.

    Response: Jeffrey Keefer

    Stephen, when you said “But your hard drive is not autonomous, it cannot opt to connect with other sources of knowledge, it cannot work without direction. It doesn’t add value – and this is key in connectivist networks,” you seem to be speaking about people who have the freedom to act independently toward a goal, which is something that those on the assembly line in your earlier example are not necessarily free or encouraged to do. If they are directed and not free, it seems that they are more like independent pieces of knowledge or skills that strategically placed together make something else. If that can be considered connectivism, then what social human endeavor (from assembling food at a fast food restaurant to preparing a team-based class project to conducting a complex surgical procedure) would not be connectivist?

    Yeah, I was thinking that as I ended the post but didn’t want to go back and rewrite the first paragraph.

    Insofar as connectivism can be defined as a set of features of successful networks (as I would assert) then it seems clear that things can be more or less connectivist. That it’s not an off-on proposition.

    An assembly line, a fast-food restaurant — these may be connectivist, but just barely. Hardly at all. Because not having the autonomy really weakens them; the people may as well be drones, like your hard drive. Not much to learn in a fast food restaurant.

    One of the things to always keep in mind is that connectivism shows that there is a point to things like diversity, autonomy, and the other elements of democracy. That these are values because networks that embody them are more reliable, more stable, can be trusted. More likely to lead, if you will, to truth.

    Karyn Romeis writes:

    What I really am struggling with is this: “The radio station example in particular troubles me. It is far too centralized and controlled.” Please, please tell me that you did not just say “Let them eat cake”.

I presume that the people who make those calls to the radio station do so because they have no means of connecting directly to the electronic resources themselves. Perhaps they do not even have access to electricity. In the light of this, they might be expected to remain ignorant of the resources available to them. However, they have made use of such technology as is available to them (the telephone) to plug into the network indirectly. They might not be very sophisticated nodes within the network, but they are there, surely? It might be clunky, but under the circumstances, it’s what they have: connection to people who have connection to technology. Otherwise we’re saying that only first world people with direct access to a network and/or the internet can aspire to connectivism. Surely there is space for a variety of networks?

    What concerns me about the use of radio stations is the element of control. It is no doubt a simple fact that there are things listeners cannot ask about via the radio method. And because radio is subject to centralized control, it can be misused. What is described here is not a misuse of radio – it actually sounds like a very enlightened use of radio. But we have seen radio very badly misused, in Rwanda, for example.

    You write, “Otherwise we’re saying that only first world people with direct access to a network and/or the internet can aspire to connectivism. Surely there is space for a variety of networks?” I draw the connection between connectivism and democracy very deliberately, as in my mind the properties of the one are the properties of the other. So my response to the question is that connectivism is available to everyone, but in the way that democracy is available to everyone.

    And what that means is that, in practice, some people do not have access to connectivist networks. My observation of this fact is not an endorsement.

    Yes, there is a space for a variety of networks. In fact, this discussion raises an interesting possibility. Thus far, the networks we have been talking about, such as the human neural network in the brain, or the electronic network that forms the internet, are physical networks. The structure of the network is embodied in the physical medium. But the radio network, as described above, may be depicted as a network. The physical medium – telephone calls and a radio station – are not inherently a network, but they are being used as a network.

    Virtual networks allow us to emulate the functioning of, and hence get the benefit of, a network. But because the continued functioning of the network depends on some very non-network conditions (the benevolence of the radio station owner, for example) it should be understood that such structures can very rapidly become non-networks.

I would like also in this context to raise another consideration, related to the size of the network. In the radio station example described, at best only a few hundred people participate directly. This is, in the nature of things, a very small network. The size of the network does matter, as various properties – diversity, for example – increase as the size increases. As we can easily see, a network consisting of two people cannot embody as much knowledge as a network consisting of two thousand people, much less two million people.

    In light of this, I would want to say that the radio station example, at best, is not the creation of a network, but rather, the creation of an extension of the network. If the people at the radio station could not look up the answers on Google, the effectiveness of the call-in service would be very different. So it seems clear here that physical networks can be extended using virtual networks.

This is somewhat like what George means when he says that he stores some of his knowledge in other people (though it is less clear to me that he intends it this way). His knowledge is stored in a physical network, his neural net, aka his brain. By accessing things like the internet, he is able to expand the capacity of his brain – the internet becomes a virtual extension of the physical neural network.

    Note that this is not the same as saying that the social network, composed of interconnected people, is the same as the neural network. They are two very different networks. But because they have the same structure, a part of one may act as a virtual extension of the other.

    This, actually, resembles what McLuhan has to say about communications media. That these media are extensions of our capacities, extensions of our voices and extensions of our senses. We use a telescope to see what we could not see, we use a radio to hear what we could not hear. Thought of collectively, we can use these media to extend our thought processes themselves. By functioning as though it were a brain, part of the wider world, virtually, becomes part of our brain.

    Responding to Glen Gatin, who wrote a longish post:

Jurgen Habermas talks about communicative action in the public sphere as an essential component of democracy. I see the process that we are using (and discussing) as a form of communicative action, and discussion groups such as this are exemplars of the activity that Habermas championed. I hope someone more versed in sociological theory can clarify, because it seems that some of the conditions that got Jurgen thinking are coming around again. (excellent Habermas interview video on YouTube)

The other point picks up on Stephen’s comment about George’s comment regarding storing knowledge or data in other people. Societies have always done that, from the guys that memorize entire holy texts, elders/hunters/warriors in various societies as repositories of specialized wisdom.

Society relies on implicit skills and knowledge, the kind that can’t be written down. Julian Orr’s fabulous thesis “Talking About Machines, An Ethnography of a Modern Job” describes the types of knowledge that can’t be documented, must be stored in other people. He points out that you can read the company manual but knowledge doesn’t come until coffee time (or the bar after work) when one of the old timers tells you what it really means. Narrative processes are key. Developing the appropriate, context-based skill sets for listening to the stories, to extract the wheat from the chaff, is a critical operation in informal learning.

Storing knowledge is what Academia was partly about, storing the wisdom of western civilization in the minds of society’s intellectuals and paying considerable amounts of public monies to have them process and extend our collective knowledge.

    All through, there are examples of the mechanisms necessary to access and participate in collective wisdom. You have to know the code, speak the language, use the proper forms of address, make the proper sacrifices, say the proper prayers, use APA format, enter the proper username and password. The internet expands the possibilities of this function as humans evolve toward a collective consciousness a la Teilhard de Chardin’s noosphere. Welcome to Gaia.

    First, a lot of people have talked about the importance of discourse in democracy. We can think of Tocqueville, for example, discussing democracy in America. The protections of freedom of speech and freedom of assembly emphasize its importance.

And so, Habermas and I agree in the sense that we both support the sorts of conditions that would enable an enlightened discourse: openness and the ability to say whatever you want, for example. But from there we part company.

    For Habermas, the discourse is what produces the knowledge, the process of arguing back and forth. Knowledge-production (and Habermas intended this process to produce moral universals) is therefore a product of our use of language. It is intentional. We build or construct (or, at least, find) these truths.

    I don’t believe anything like this (maybe George does, in which case we could argue over whether it constitutes a part of connectivism 😉 ). It is the mere process of communication, whether codified intentionally in a language of discourse or not, that creates knowledge. And the knowledge isn’t somehow codified in the discourse, rather, it is emergent, it is, if you will, above the discourse.

    Also, for Habermas, there must be some commonality of purpose, some sense of sharing or group identity. There are specific ‘discourse ethics’. We need to free ourselves from our particular points of view. We need to evaluate propositions from a common perspective. All this to arrive at some sort of shared understanding.

    Again, all this forms no part of what I think of connectivism. What makes the network work is diversity. We need to keep our individual prejudices and interests. We should certainly not subsume ourselves to the interests of the groups. If there are rules of arguing, they are arrived at only by mutual consent, and are in any case arbitrary and capricious, as likely as not to be jettisoned at any time. And if there is an emergent ‘moral truth’ that arises out of these interactions, it is in no way embodied in these interactions, and is indeed seen from a different perspective from each of the participants.

Now, also, “The other point picks up on Stephen’s comment about George’s comment regarding storing knowledge or data in other people. Societies have always done that, from the guys that memorize entire holy texts, elders/hunters/warriors in various societies as repositories of specialized wisdom.”

    This sort of discourse suggests that there is an (autonomous?) entity, ‘society’, that uses something (distinct from itself?), an elder, say, to store part of its memory. As though this elder is in some sense what I characterized as a virtual extension of a society.

But of course, the elder in question is a physical part of the society. The physical constituents of society just are people (“Society Green! It’s made of people!!”) in the same way that the physical constituents of a brain network are individual neurons. So an elder who memorizes texts is not an extension of society, he or she is a part of society. He or she isn’t ‘used’ by society to think, he or she is ‘society thinking’. (It’s like the difference between saying “I use my neurons to think” and “my neurons think”).

Again, “Society relies on implicit skills and knowledge, the kind that can’t be written down. Julian Orr’s fabulous thesis “Talking About Machines, An Ethnography of a Modern Job” describes the types of knowledge that can’t be documented, must be stored in other people.”

    This seems to imply that there is some entity, ‘society’, that is distinct from the people who make up that entity. But there is not. We are society. Society doesn’t ‘store knowledge in people’, it stores knowledge in itself (and where else would it store knowledge?).

That’s why this is just wrong: “the mechanisms necessary to access and participate in collective wisdom. You have to know the code, speak the language, use the proper forms of address, make the proper sacrifices, say the proper prayers, use APA format, enter the proper username and password. The internet expands the possibilities of this function as humans evolve toward a collective consciousness a la Teilhard de Chardin’s noosphere. Welcome to Gaia.”

There are no mechanisms ‘necessary’ in order to access and participate in the collective wisdom. You connect how you connect. Some people (such as myself) access via writing posts. Other people (such as George) access via writing books. Other people (such as Clifford Olson) access via mass murder. Now George and I (and the rest of us) don’t like what Clifford Olson did. But the very fact that we can refer to him proves that you can break every standard of civilized society and still be a part of the communicative network. Because networks are open.

    A network isn’t like some kind of club. No girls allowed. There’s no code, language, proper form of address, format, username or password. These are things that characterize groups. The pervasive use of these things actually breaks the network. How, for example, can we think outside the domains of groupthink if we’re restricted by vocabulary or format?

The network (or, as I would say, a well-functioning network) is exactly the rejection of codes and language, proper forms of address, formats, usernames and passwords. I have a tenuous connection (as Granovetter would say, ‘weak ties’) with other members of the network, formed on the flimsiest of pretexts, which may be based on some voluntary protocols. That’s it.

From the perspective of the network, at least, nothing more is wanted or desired (from our perspective as humans, there is an emotional need for strong ties and a sense of belonging as well, but this need is distinct and not a part of the knowledge-generating process).

    To the extent that there is or will be a collective consciousness (and we may well be billions of entities short of a brain) there is no reason to suspect that it will resemble human consciousness and no reason to believe that such a collective consciousness will have any say (or interest) in the functioning of its entities. Do you stay awake at night wondering about the moral turpitude of each of your ten billion neurons? Do you even care (beyond massive numbers) whether they live or die?

    Insofar as a morality can be derived from the functioning of the network, it is not that the network as a whole will deliver unto us some universal moral code. We’re still stuck each fending for ourselves; no such code will be forthcoming.

    At best, what the functioning of the network tells us about morality is that it defines that set of characteristics that help or hinder its functioning as a network. But you’re still free to opt out; there’s no moral imperative that forces you to help Gaia (there’s no meaning of life otherwise, though, so you may as well – just go into it with your eyes open, this is a choice, not a condition). Be, it might be said, the best neuron you can be, even though the brain won’t tell you how and doesn’t care whether or not you are.

This is what characterizes the real cleave between myself and many (if not most) of these other theorists. They all seem to want to place the burden of learning, of meaning, of morality, of whatever, into society. As though society cares. As though society has an interest. As though society could express itself. The ‘general will’, as Rousseau characterized it, as though there could be some sort of human representation or instantiation of that will. We don’t even know what society thinks (if anything) about what it is (again – ask yourself – how much does a single neuron know about Descartes?). Our very best guesses are just that — and they are ineliminably representations in human terms of very non-human phenomena.

Recall Nietzsche. The first thing the superman would do would be to eschew the so-called morality of society. Because he, after all, would have a much better view of what is essentially unknowable. The ease with which we can switch from saying that society requires one thing to saying that society requires a different thing demonstrates the extent to which our interpretations of what society has to say depend much more on what we are looking for than on what is actually there.

    Footnotes

    133 Virginia Yongers. Context filter-Business and Workplace Education/Training. Online Connectivism Conference. February 2, 2007. http://ltc.umanitoba.ca/moodle/mod/forum/discuss.php?d=45

    Moncton, February 20, 2007

     

    Meaning, Language and Metadata

    Principles of Distributed Representation

    This is the edited text of a talk delivered at the EDUCAUSE Seminars on Academic Computing, Snowmass, Colorado, August 9, 2005.

     

    Introduction

    Thank you. It’s a pleasure to be here. I have my water. I need my water. You know, I feel a little bit out of my element here. I live at sea level and so I don’t spend a whole lot of time at this altitude. So if I sit down part way through my talk…

    And I’m also a bit out of my element because I’m kind of outside EDUCAUSE134, I live in another country, three time zones away, and being a bit on the outside, I come here, I come here to a talk like this and it’s almost like kind of coming to see the establishment. And I don’t get a chance to talk to this particular group of people a whole lot, and so, what am I going to say?

    And of course I’m here at this conference135 in this beautiful location136 – and thank you so much for inviting me – and thirty years of tradition, and I’m wondering, you know, how can I do anything different, be distinct, because you know I’m kind of an outsider, and I was thinking, yesterday, that maybe I should do something like quote some Che Guevara137 or something like that, and I thought that, and I looked through but couldn’t find any good quotes.

    I did bring the book. So I figure, I’m probably the only person at this conference to wave a copy of Che Guevara138 at the podium, and if not, then I’m probably in very good company.

    OK, well I thought that was pretty funny. It took me a long time to come up with that.

    Today’s talk is ‘Principles of Distributed Representation’ and when you looked at the outline you were probably thinking to yourself, “Oh no, another metadata paper.” And if you’re like me you’re probably tired of metadata papers. And today’s talk is sort of about metadata, and I will talk about metadata because I did kind of promise that I would, and I’ve learned from hard hard experience that you really should talk about whatever’s in the abstract.

But today’s talk is also not about metadata. It’s about knowledge, it’s about the changing model or picture of learning that new technology brings to us, it talks about networks in specific, and then, near the end, I begin to apply this to metadata. Now, I’m going to talk a lot about things that aren’t metadata, therefore, but as I go through this you should be thinking as I go along that each thing that I say is about metadata. I know that doesn’t make a whole lot of sense, but it’ll come a bit clearer, I hope it comes clearer.

    Knowledge: The Traditional Theory

    We begin with knowledge, and I have to begin with knowledge because my training is in philosophy, and if I don’t begin with knowledge I’m kind of lost, like in a rowboat without a paddle, or whatever. This would be different here.

    And we have this picture, don’t we, of what knowledge is. And the picture – I’ve sort of caricatured it on the screen here – knowledge is like entities in the brain corresponding to sentences like, ‘Paris is the capital of France.’ And if somebody asks you, ‘What is your knowledge like?’, ‘Paris is the capital of France,’ you probably talk about the sentence, and the meanings of the words, and how the words go together, and the syntax and forms of grammar.

    And there’s a fairly established theory, and some of the more recent writers, people like Chomsky139 and Fodor140, talk about the, if you will, the writing in the brain. And we can think about that literally, people like Fodor think about that literally, or we can think about that metaphorically, but even so, that is kind of the picture of knowledge that we have.

    It is what I sometimes think of as the ‘information theoretic view’ where communication involves getting a bit of knowledge, like that sentence, ‘Paris is the capital of France,’ from point A to point B. From professor to student. From speaker – me – to you – people in the audience.

    And the whole theory of distance learning is wrapped up around this concept, so you get for example Moore’s concept of transactional distance where you try to bridge this gap. You know, there’s been so many conferences, ‘bridging the gap’141 between this and that, and we try to improve the communication and create interaction – when I read about interaction, I do have a background in computers, I think ‘checksum’142, oh yeah, he’s invented checksums.

    Moore,143 transactional distance: “the physical separation that leads to a psychological and communications gap, a space of potential misunderstanding between the inputs of instructor and those of the learner.” So it’s communication theoretic, isn’t it, and they’re talking signals sent and signals received, noise, feedback, all of that sort of thing.

    That’s the traditional picture. In effect, knowledge is like sentences. Those of you who are familiar with RDF,144 you’re probably all familiar with RDF, the ‘subject verb object’ type of formation. Vocabulary in a language is unambiguous. The fact that you invited me from another country three time zones away, you presumed that when I used words, I’d probably use words much the same way you use words, and if I said the word ‘Paris’ you’d pretty much get what I meant. I depend on that sometimes, and I’m certainly depending on that at the moment, because otherwise it would be like that commercial where I’m talking Russian or something.

Description is pretty much concrete. This is a bottle of water. This has a blue label. A ‘horse’ is a horse (of course, of course). Ah, I couldn’t resist. Yeah you get on a roll when you’re typing these slides. And that has gotten me in trouble before. I don’t always delete… anyhow.

    Revising The Traditional Picture

    But, none of this is true. And not only is it not true empirically, it can’t be true. Because, if it were true, then context would have no effect on truth or meaning. But context sensitivity is everywhere. And I’ve sort of spewed a list of references there for you.

    Wittgenstein145: meaning is use. Quine146: the indeterminacy of reference, the indeterminacy of language. When a native points to something and says ‘gavagai’ does he mean ‘rabbit’ or does he mean ‘spirits of my ancestor’?

    van Fraassen147: scientific explanation. ‘A is the cause of B’ can only be understood in the context of an alternative event, C. Why did the plants grow? Well the plants grow because we put seeds in the ground. As opposed to, the plants grow because I put fertilizer in the ground. As opposed to, the plants grow because, well, there’s photosynthesis, and there’s sunlight, and all of that. As opposed to, the plants grow because God wills it. The explanation depends on your context.

    Hanson148: causation. What was the cause of the accident? Well, it was the brakes, it was the drunken driver, it was the bush at the side of the road. George Lakoff149: categorization.

    Different cultures organize the world different ways. There is indeed, says Lakoff, a culture out there that classifies ‘women, fire and dangerous things’ as one category, and everything else as another category.

    Robert Stalnaker150, David Lewis151: modality, the logics of necessity and possibility. They’re based on the most similar possible world. But what makes a possible world the most similar? Well that depends on how you view the world that you’re in.

     

What we know, crucially, depends on our point of view. Now I tried to come up with a bit of a diagram here, this is a new one for me, but, in the centre there, that’s reality, properly so-called, and then around the outside of that diagram we have four points of view and you can see that as we each look at reality from our different points of view our view of reality is slightly different, which I’ve represented by reorganizing the letters in the little boxes.

    But in fact, all we have is our point of view, all we have are the things in the little boxes. And language, which is what we use to try to get at what’s in the middle, is at best an approximation, and at worst a parody of what knowledge is actually there.

     

    Now that’s a hard concept. So I’m going to draw it out a bit. Some of the implications of this. And again, remember, I’m talking about knowledge, but I’m also talking about metadata.

     

    Implications of the Revised Theory

    1. Knowledge is subsymbolic. That is to say, what we know is not isomorphic with the words that express what we know. Another way of saying the same thing is, and those of you who are educators I’m sure have seen this in practice, the mere possession of the words is not the same as knowing something. The knowing of something depends not simply on the words but on the application of the words in the appropriate context.

    And since I’m… I’ll refer to Michael Polanyi152 here as well, and point out that a lot of knowledge indeed cannot be expressed in words, personal knowledge, tacit knowledge, the skill of how to throw a dart. Believe me, if that knowledge could be expressed in words, I would be a good dart player.

    2. Second, crucially, knowledge is distributed. There is no specific entity that constitutes the knowledge that ‘Paris is the capital of France.’ Now think about how that contrasts with the picture I drew at the beginning of this talk, where we have this thing in our mind that’s the knowledge that Paris is the capital of France. Well that knowledge doesn’t occupy a particular place in the mind. It’s spread out, it’s in billions of neurons.

    But not only that, it’s not even completely entirely contained in the mind. My knowledge that ‘Paris is the capital of France’ is, partially, contained in you. Because I need to know what the word ‘Paris’ means, what the concept of a ‘capital’ is, what the word ‘is’ is; the Oxford English Dictionary has, what, fifteen pages trying to define the word ‘is’. There is no given person who has that particular paradigm bit of knowledge ‘Paris is the capital of France’.

    Now I know it sounds unintuitive, so let me give you a slightly more intuitive way, an intuitive way, of representing this. This morning, if you were awake, and I sincerely hope you weren’t, we saw the space shuttle153 come in for a landing. And it did in fact land. Rock and roll; we like that.

Where does the knowledge of how to launch, fly and land a shuttle reside? What person has this knowledge? And clearly, as soon as you reflect on that, you realize, nobody. Nobody could. There is so much involved in the launching, flying and landing of a shuttle that no one person could possibly have that knowledge. Some people know how to make shuttle tires. Other people know how to make shuttle tiles. Other people know how to do the launch sequence, somebody knows how to do that countdown, ’10, 9, 8…’ I guess it’s a skill. Somebody in the shuttle knows how to go out of the shuttle and pull the little bit of paper out from in between the tiles. Somebody else knows… and you get the idea.

    What I’m saying is that all knowledge is like that, not just the complicated stuff, because, again, this is, like, my background in philosophy, as soon as you begin pushing even the simple stuff, like ‘Paris is the capital of France’, it gets really complicated in a hurry. What do you mean by ‘capital’? What do you mean by ‘is’?

    3. Knowledge is interconnected. This is very different from the traditional picture. The traditional picture, you have a sentence, ‘Paris is the capital of France’, that’s it, you’re done, you’ve got your knowledge. But ‘Paris is the capital of France’ – that bit of knowledge is actually a part of other bits of knowledge, and other bits of knowledge are part of the knowledge that ‘Paris is the capital of France’.

    The knowledge that ‘countries have capitals’ is part of that knowledge. The sentence ‘Paris is the capital of France’ wouldn’t make any sense to you if countries didn’t have capitals. And it’s playing with these sorts of connections that is the basis for a whole lot of jokes. “What’s the capital of France? About 23 dollars.” That sort of thing, and you mess around with the preconceived understandings of the words.

    Even sentences like ‘ducks are animals’ are related, in a complex chain, to the sentence ‘Paris is the capital of France’, it’s like Quine says154, it’s a web.

    4. Knowledge is personal. And if you go to a knowledge management conference you'll probably hear Polanyi, Polanyi, Polanyi, and they talk about, oh, let's extract all this tacit knowledge155 and we'll put it in a database; and, if you read Polanyi, that's exactly what you can't do, because the knowledge that's in your head is embedded, it's personal, it's sitting there in a context. If you pull it out and put it up, it doesn't make sense any more.

    Your belief that ‘Paris is the capital of France’ is quite literally – I don’t mean this metaphorically – it’s literally different from my belief that ‘Paris is the capital of France’. And if you think about it, think about the word ‘Paris’. All right. How many of you thought about the word ‘plaster’? One, two? OK. How many of you thought about the word ‘Hilton’?

    Now, I’ve used two examples here, we got a few people raising their hands, and everyone else not raising their hands, and those are the first two things that come up in my mind, and I’m wondering – you know what I said, I’m out of my element here, right? – I say the word ‘Paris’ I have certain associations, you say ‘Paris’, you have different associations, and now I’m wondering what they are.

    I have one set of thoughts when I think of 'Paris', you have (a) different set of thoughts; why aren't they the same? If knowledge is as that traditional picture has it, they should be the same. If I mean 'Paris' I mean the exact same thing as you. But that's clearly and evidently not the case.

    5. Fifth. Knowledge is emergent. And, yeah, I know, we’ve got Steven Johnson156 and others, and emergent this, and emergent that, it’s the new buzzword. The knowledge that ‘Paris is the capital of France’, we have this kind of abstract idea that we share, the knowledge that ‘Paris is the capital of France’, the Platonic ideal almost that we’re trying to get at, and what I’m saying here is that this concept is emergent from the many individual bits of knowledge inside all of yourselves that ‘Paris is the capital of France’.

    Now the thing about emergence, and I don’t see people write about this, maybe it’s me but I don’t know, but maybe I’m just naive, emergence is not a causal phenomenon. Well, yeah, OK, it is a causal phenomenon, you go to the micro levels and bits and atoms and all of that, and draw a causal picture, but the causal picture is so complicated nobody could understand it, it’s like the weather is a causal picture but who’s going to draw the line from this to this to this and make an accurate prediction forty-three years from now? It’s not going to happen.

    But at the higher level, emergence is a phenomenon of recognition. You need a viewer. You need a perceiver. You don’t get away without having one. Think about a picture of Richard Nixon on the television. You see the television, well, what you really see are all those little pixels. And you know this, you’ve heard this story before, you look at all those, and the way those pixels are all organized, the way those pixels are coloured, the picture of Richard Nixon emerges from the television.

    But, if you had never heard of Richard Nixon you would not recognize that as a picture of Richard Nixon. At the very best it would be ‘some guy’. And if you’re an alien from another planet, you’re visiting with the people on the space shuttle – I like to go with a theme – then you’re not even sure whether it’s a human or a rock formation, could be anything.

    Emergence requires perception. It requires a perceiver. That is why it is context sensitive and that is why knowledge is context sensitive.

     

    Knowledge Creation and Acquisition in Networks

    Here’s another buzzword: the wisdom of crowds157. What does that mean? Knowledge is distributed. Each one of us is a piece of the puzzle. And we don’t acquire this piece, it’s not like somebody comes to a podium and talks a piece and you’re sitting there and OK you have it. It doesn’t work that way.

    As you sit there, indeed even as this talk is happening, you are not simply acquiring the words that I give you, and I sincerely hope not, though maybe I’ll start reading some Che and see what happens… no, I’m kidding. Right? The stuff’s coming in, but then it meshes and shmooshes with everything else that you’ve got going on, and what happens in your mind is you create something new out of it, and then that new thing becomes another piece of the puzzle, and it gets fed back in. And back and forth it goes. Back and forth over and over again.

    Creation, on this model, is a process of acquisition, you get the input in, the talk, the website, the paper, the television show, the trip through the forest, you remix it, you take a bit here, a bit here, a bit here, a bit here, you put it together, and sometimes in a new arrangement, sometimes in an arrangement you’re comfortable with, you repurpose it, you reshape it, you frame it according to your own background knowledge, your own beliefs, your own understandings of the words. This guy at the front of the room says the word ‘Paris’, you take that word, and shape that, fold that, into a place where it fits in your mind.

    And then you feed it forward. You complain to the organizing committee after the talk. Just kidding. Or if Alan Levine’s in here, he’s probably blogging this. You pass it along. And this process happens over and over again. And each individual person does this, and it creates this network of meaning.

    It’s not simply a physical network. You read people like Barabási158 or Watts159 and they talk a lot about the structures and the structural properties of networks, but what’s interesting and important are the semantical properties of networks, and the semantical properties are what is found, what are found, when we look at these concepts, as they’re being molded, as they’re being passed along, and what emerges from them.

    Hence, for example, we’ve seen this before, in the literature, we’ll go back to the 1970s, Thomas Kuhn, Structure of Scientific Revolutions160. What it is to know in, as he says, normal science, to know, to learn a science, to learn a discipline, says Kuhn, is not to know a whole bunch of facts, but to learn how to solve the problems at the back of the chapter. And as someone who’s struggled with those problems at the back of the chapter, I can tell you, the stuff that you need to solve the problems isn’t in the text that preceded the problems. I have analyzed this.

    More modern: Etienne Wenger161. Learning is participation in a community of practice, and again, this is the same concept here, that’s coming out. This instead of learning as being the acquisition of facts, rather, learning as immersion into an environment. Well your metadata should be like that too.

     

    Properties of Successful Networks

    Properties of successful networks. I like to adapt. And so yesterday we heard Charles Vest162 talking about the three attributes of (a) successful university system, you had that nice list. Of course, I’m sitting there in the back, the very back of the room, sitting there, “yeah, but it’s the Times of London, they have an agenda.” Everything is context, right?

    But anyhow, but the attributes, the attributes were important. The attributes that he identified were right. I think they’re vital, and they’re fundamental, and it’s kind of neat, because I come in and think I’m going to do a talk on principles of representation, and I come in, and I pick up the principles from the opening talk. But that’s how this works. You thought, probably, you thought the talk was static, dynamic, something that existed before it actually happened, but that’s not how it works. When the knowledge is in this network, in this flow, interacting back and forward, it quite literally changes from day to day.

    Charles Vest, three key attributes.

    Diversity. I kind of recast that as ‘many objectives’. He was talking about the different types of institutions, you’ve got your land grants, you’ve got your publics, et cetera. All the different types of institutions, there were many.

    Interwoven. For Charles Vest interwoven is teaching and research. (Note: I was trying a play from a Simpsons episode; “We play all kinds of music: country and western.”) OK, that didn’t come out quite right. But the idea is, you’re not focused on a single thing, you’re not doing just one thing.

    And then crucially, and this is the core of course, behind MIT’s Open Courseware163, and the many other projects that he mentioned, it’s open. The system is open. The network is open. It admits many minds, many points of view. And that openness is what enables the communication and the exchange of concepts and ideas to happen, that creates that network effect.

     

    Diversity. That means, if diversity is a virtue, then its converse is not. So the idea of making everything the same, making everything of anything the same, is fundamentally misguided. Now, many of you work on something called 'standards'. A standard is, by definition, the making of everything the same. So we have a tension here.

    Interwoven. The idea that our different activities are distinct is fundamentally misguided. Those of you who took in the talk on what the next net generation expects of us will have caught the flavour of this.

    There is no real distinction between home and work and school and hobbies; it's all part of a great tapestry, isn't it? And yet, look at how we've structured institutions: we've got entire buildings dedicated to 'school' only, and you sort of scratch your head. If school's not distinct from work, why is there a separate building for school? It seems sort of odd.

    Metadata, we have metadata that is like ‘school’ metadata, and then we have other metadata which is ‘work’ metadata, and they will never meet.

     

    Open. The idea that we can store knowledge in closed repositories, and I’m thinking here specifically of things like Learning Content Management Systems (LCMSs), but more generally, of the whole range of institutional repositories that require passwords, authentication, IP checks and blood types in order to get access to – that idea is fundamentally misguided.

    And to illustrate – and that's why I'm so pleased to come here to a conference like this and look at all the sessions on open source and open software, open content, and I'm beginning to think, it's great, people are beginning to get this – the argument, in my mind the argument in favour of open content and open software is really very simple: if you picture the network of knowledge as being like the network of neurons in the mind, then barriers, like copyright limitations, password access and all of that, that's like putting blockages in the connections between the neurons in your mind. And if that happens to a person, if their neurons stop sending signals freely and openly to each other, we consider that to be very sick, fundamentally ill, in need of major care and treatment and support. It's not a healthy knowing mind at that point, is it? It's certainly not a remembering mind.

     

    The Properties Applied to Metadata

    So anyhow, I did say I’d talk about metadata some time, so what about metadata? I’m going to shift gears a little bit here and take these properties and apply them specifically to metadata.

    The properties, the three properties, that I've just described are not merely properties of universities. Because after all, the basic unit of knowledge is not the university. It's… well, when I was writing this I was first going to say something much smaller, to try to come down a level; but then, well, what is the basic unit of knowledge? And I realized, uh oh, I've stumbled into philosophy again. So, I'll dither.

    Here’s the picture that I’ve been trying to draw so far, I don’t know if the different colours come out clearly, but they’re there. Those circles, they’re not all actually the same, it’s just, you try to do a graphic in five minutes, you go for the predefined circle. But think of those circles, they’re all different, they’re all diverse, they’re all autonomous, they’re all doing their own thing, and they’re connected.

    And the knowledge itself consists in the connections between the circles. I’ve got one set of black lines, that represents ‘Idea A’, and another set of red lines, that represents ‘idea B’. That, that’s our knowledge network.

    The same picture applies at all levels. And I know I'm making a very strong claim here, but I believe, I don't have time to go into all of the detail here on that, I believe there is significant empirical evidence to support this. The same principles that govern the interactions between bloggers164 also cover the distribution of rivers165 in a river valley, also cover the way crickets chirp166 in unison. There's the picture.

    At the lowest level, if you will, there are neurons, but also the interconnection between ideas that I’ve been talking about, interconnection between metadata, interconnection between people, which these days has reached hype status under the heading of ‘social networking’, and then, of course, at the top, the interconnection between universities, the mechanism by which you develop an excellent university system.

    And just for good measure, and I’m not going to linger on this, as you go from the smaller to the larger, you have your causal relationships, but equally importantly, as you go from the larger to the smaller you have your perceptual relationships, that’s the being able to recognize the picture of Richard Nixon, as opposed to, say, having a picture of Richard Nixon be ’caused’, the recognition of Richard Nixon be ’caused’.

    Now, this is drawn in a nice neat line. It is not a nice neat line. I left out all kinds of… I left out crickets, for one thing. I wasn’t sure I should put ideas above metadata or metadata above ideas. I don’t want to convey the idea here that it’s nice neat layers all the way up. It is not; it is a chaotic mess. But, if we abstract it, apply words to it, this is kind of what we get.

    That’s the background into which I approach metadata. Now thinking about metadata, and thinking about the way metadata ought to be organized and structured, I came up with the concept that I call ‘resource profiles’167, I wrote a paper on that a couple of years ago, a couple of people read it, which is nice. And in that paper I described three major features of metadata.

    First of all there are different types of metadata. What very recently we would call microformats. I’ll talk more about all of these. Second, the information in metadata is distributed. And then, third, any given perspective, any given point of view, any given context of recognition, is the result of aggregation, of bringing things in.

    Now just last thing this morning, before I came here, as I was reviewing these, I realized, oh yeah, wait, these are the same principles that I just talked about, so: different types of metadata – diversity; information is distributed – interwoven; different perspective is aggregated – open. So there's a correspondence there, I'm not sure of the significance of that, but it's certainly matched, at least it matched at seven-thirty this morning.

     

    Learning Object Metadata: Microformats

    So let’s look at learning object metadata168 specifically. I am going to work from the assumption that you are all familiar with learning object metadata.

    All right. Learning object metadata is one of your classic standardization exercises, and when I look at learning object metadata it is the oddest thing in the world to me to see every metadata record having exactly the same structure no matter what kind of learning object is described.

    That just seems wrong. And it seems to me that we lose a lot when we do that.

    If you look at different types of resources, I’ve got a couple of examples here but you can multiply them yourselves, you’ve got a video resource and an audio resource. Now these are two very different types of resources. One will have a bitrate, that’ll be the audio. Another will have a framerate, that’ll be the video. And the video will have a size. And the audio, size makes no sense.

    So, there are, or there ought to be, what we might call LOM microformats. If we have learning object metadata that describes an audio resource, then the metadata appropriate to audio resources ought to be a part of that learning object metadata. If, on the other hand, the resource is, say, an essay, in Microsoft Word, you use a different type of metadata. If it’s a learning object properly so-called, with learning outcomes and activities, the model, then you use different metadata. If the learning object is an opportunity for a one-on-one personal engagement with an online mentor, then you use different metadata. And the different metadata varies, so you have different technical metadata, you have different educational properties, and so on.
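
    To make that concrete, here is a minimal sketch, in Python, of what type-specific technical metadata might look like. It is my own illustration, not the IEEE LOM schema; the resource types and field names (bitrate, framerate and so on) are assumptions chosen for the example.

```python
# A minimal sketch of type-specific ("microformat"-style) technical metadata.
# Field names here are illustrative assumptions, not taken from the LOM standard.

COMMON_FIELDS = {"title", "description", "location"}

TECHNICAL_PROFILES = {
    # each resource type gets only the technical fields that make sense for it
    "audio": {"format", "bitrate", "duration"},
    "video": {"format", "framerate", "width", "height", "duration"},
    "document": {"format", "page_count"},
    "mentoring_session": {"mentor", "schedule", "delivery_medium"},
}

def validate(record: dict) -> list:
    """Return the fields that do not belong to this resource type."""
    allowed = COMMON_FIELDS | TECHNICAL_PROFILES[record["type"]]
    return [f for f in record if f not in allowed and f != "type"]

audio = {"type": "audio", "title": "Lecture 1",
         "location": "http://example.org/l1.mp3",
         "bitrate": "128kbps", "framerate": "30fps"}  # framerate makes no sense for audio

print(validate(audio))   # ['framerate'] is flagged, because audio has no frame rate
```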

    We think of learning object metadata as though it is just one big monolithic format. But in so doing we not only mis-shape the descriptions of the objects – look at the technical elements of the metadata. Really, you don't learn anything. Well, you don't learn much about the technical properties of the resource you're describing, because we've tried to make one size fit all, and we've just sort of fudged it.

    Look at rights. “Yes, yes, description.” What kind of rights metadata is that? I mean, it doesn’t work at all, but again, because we’re trying to get one size fits all, we just wipe out the detail and just go for something, oh, you know, this will work for everyone, I guess that’ll do.

    Learning object metadata, too, it just seems odd to me, it’s almost like it’s in this world apart, like I said earlier, it’s ‘school’ metadata. And when we’re thinking of learning object metadata as metadata that could be constructed out of other types of metadata, that draws us to the conclusion that we should see learning object metadata as metadata that is situated in an environment where there are other types of metadata surrounding it. And learning object metadata and these other types of metadata interconnect, interact, and indeed, you would take, say, personal metadata, such as Friend of a Friend169, and actually bring it in to the learning object metadata. Oh we got, we got vcards170 instead. I’ve always been scratching my head over that one, why there’s vcard metadata in learning object metadata.

    Rights metadata: instead of "yes, yes, description" we have rich, expressive languages that could be used to express rights in learning objects. But we have to learn to stop seeing learning object metadata as something separate and stand-alone, something where we have to invent it all from scratch.

    When I think of metadata, I think of RSS171. RSS is beautiful. RSS: title, description, link. You’re done. And then you just add other stuff to it as needed. And learning object metadata has even recreated title, description, link. It has its own special fields for it. Now there have been crosswalks built between learning object metadata and Dublin Core, but I sort of wonder, why didn’t they just take the core of Dublin Core172 and, “we’ll use that.” That’s what RSS does. Need creator metadata? Dublin Core, dc:creator. And you’re done, you didn’t need a special RSS element for creator.
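
    By way of illustration, here is a small sketch of an RSS 2.0 item that keeps to title, description and link, and borrows dc:creator from Dublin Core rather than inventing a new element. The item values are made up; the namespaces are the standard RSS 2.0 and Dublin Core ones.

```python
# Sketch: an RSS 2.0 item using the Dublin Core 'creator' element instead of
# a bespoke field. Item values are invented for illustration.
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example Feed"
ET.SubElement(channel, "link").text = "http://example.org/"
ET.SubElement(channel, "description").text = "Title, description, link; you're done."

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Resource Profiles"
ET.SubElement(item, "link").text = "http://example.org/resource-profiles"
ET.SubElement(item, "description").text = "A paper about metadata."
ET.SubElement(item, f"{{{DC}}}creator").text = "Stephen Downes"  # reuse dc:creator

print(ET.tostring(rss, encoding="unicode"))
```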

    I want you to think about how limited our conception of what a learning resource could be has become because of the way we've shaped our metadata. Picture a learning object in your mind for a moment, of course it's all different pictures, and ask yourself, how do you represent an event in learning object metadata? Where is the field, 'start time'? What happens? I mean, you can make it work, but you're taking these standard fields and kind of using them for your own purpose. You're ignoring the 'real meaning', properly so-called, of what that field means; you'd probably stuff it in technical resources or something. Yeah, why not? If everybody else who's getting the metadata knows what you mean, it doesn't really matter what the words say.

    Learning object metadata, as it is structured now, actually collapses our view of what a learning resource can be into this static ‘knowledge as something like a sentence’ picture of learning. But if we break the constraints of vocabulary imposed on us by learning object metadata we also break the conceptual constraints of what a learning object can be. And then it can be a mentoring session. Then it can be a seminar. Then it can be an organization. Now what does an organization look like as a learning object? I don’t know, but I’d like to be able to describe it.

     

    Learning Object Metadata: Distributed Metadata

    We have this thing, learning object repositories, metadata repositories, and we have this picture of the metadata being like the card catalogue173. How many times have you heard that analogy? The other one is the label on the soup can174. But people love the card catalogue analogy. And so you have each individual record, each individual card describes a resource for us, so when we want to go locate a learning object, we're going to do just like we do in a library, we go search the card catalogue.

    Most knowledge isn’t organized this way. Think about how we would describe a person in metadata. Think of yourselves as a prospective employer of that person. So, what do you want? Well, you don’t know the person, well, you’re not supposed to anyways, so what you want is person metadata. Which these days is called the c.v. So the c.v.s come in, you’ve got this pile on your desk, that’s all the metadata, now you’re going to go through the search process and try to retrieve the records, the people, that you want for your position.

    The question here is, as a potential employer, are you going to depend completely and exclusively on the c.v. in order to come to conclusions about the attributes of that person? I contend that you would be nuts to do so. And nobody would. At the very minimum, we have interviews so that we can get other data. But typically, we'd do things like, we'd run a reference check. I don't know how it works here, but in Canada we'd check and see if they have a criminal record, we'd run it through that sort of database. I don't think we do it in Canada, but they may do it here so I put it in: you may check their credit history, to make sure they're not a bad risk. If they give you a name and an address you might confirm that in a phone book.

    The point here is, what we know about a person is not contained in a single metadata record, and indeed, it’s not contained even in a single location. And that is crucial for our understanding of, our knowledge about, that person.

    And of course, it’s all point of view. A prospective employer is interested in one set of personal metadata, a prospective date is interested in a very different set of metadata. And because I couldn’t resist, I diagrammed that.

    So we have different types of metadata, the classic c.v., which I consider bibliographic metadata, that’s the stuff you were born with. Actually it doesn’t even include name when you’re born, unless your parents planned ahead and did ultrasound or whatever. When I grew up the name came after the birth. But, age, that’s known right from Day One, stuff like that.

    Then you’ll have health metadata, which would be located in the doctor’s office or in the hospital, or I guess down here they’d be, what are they, HMOs? Grades, which would be held at the school, because you’re not going to trust the person to provide an accurate transcript of their grades, because if you did, everybody gets As. The police criminal record, again, you get that directly from the police. The bank, or I guess you have it here too, Equifax, you get the credit information, which is sometimes accurate, sometimes less so. Information about teeth from the dentist.

    Now your employer is going to aggregate this information, bring it in, and remix it, and organize it, in order to form their own view, their own perspective. C.V., grades, health, criminal record. The date doesn’t really care about the c.v., well, most dates don’t. They’re interested in health, criminal record, well they usually do care about that, credit, and so I’m told, teeth. And you could go on. I could make this list much longer and I could come up with different points of view.
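
    Here is a rough sketch of that aggregation-by-perspective idea: the metadata about a person lives at several separate sources, and each point of view pulls in only the pieces it cares about. The source names, values and fields are invented for illustration.

```python
# Sketch: person metadata distributed across sources, aggregated per perspective.
# Source names and fields are invented for illustration.

SOURCES = {
    "cv":      {"name": "A. Person", "degrees": ["BA"], "jobs": ["clerk"]},
    "school":  {"grades": {"Math": "B+", "History": "A-"}},
    "police":  {"criminal_record": None},
    "credit":  {"credit_score": 680},
    "dentist": {"teeth": "all present"},
}

# Each perspective is just a list of the sources it cares about.
PERSPECTIVES = {
    "employer": ["cv", "school", "police", "credit"],
    "date":     ["police", "credit", "dentist"],
}

def aggregate(perspective: str) -> dict:
    """Build one view of the person by merging only the relevant sources."""
    view = {}
    for source in PERSPECTIVES[perspective]:
        view.update(SOURCES[source])
    return view

print(aggregate("employer"))
print(aggregate("date"))
```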

    Learning object metadata is the same thing. You have a resource. It is born, created, the fruits of creativity, you know what I mean. And it has a creation date, it has a parent or author, you’ll give it a name, you’ll say it’s a nice learning object, it’s about rockets, and so on. And then it goes out into the world, and as it’s out there in the world, then it begins to acquire different properties. Fred Penner used it in a nature class. Joe Jackson thought it was really good and gave it a rating of 5. The Mennonite Central Committee had a look at it and gave it the approval for LDS classes. The Siskel and Ebert of e-learning gave it two thumbs up.

    In general, we can identify three major types of metadata. First party metadata – metadata created by the author. Bibliographic metadata. Second party metadata – metadata created by the user of a resource. Evaluation. Context of use. "I used this resource in a math class." Third party metadata – metadata created by an observer of some sort. The Mennonite Central Committee. The rights broker. The Siskel and Ebert of e-learning. First, second and third party metadata.

    Learning object metadata of the future will be composed of these three types of metadata, and the microformats within these three types of metadata will be mixed and matched, mixed and matched according to the nature of the resource, but mixed and matched according to the perspective, point of view, or context of use of this metadata.
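
    A minimal sketch of what that might look like: metadata statements about one resource, tagged by who made them, so an aggregator can mix and match by party as well as by type. The identifiers and statements are invented; only the examples of parties come from the text above.

```python
# Sketch: first-, second- and third-party metadata about a single resource,
# kept as separate statements and filtered on demand. All values invented.

STATEMENTS = [
    {"about": "obj-42", "party": "first",  "by": "author",
     "says": {"title": "Rockets", "created": "2005-06-01"}},
    {"about": "obj-42", "party": "second", "by": "Fred Penner",
     "says": {"used_in": "nature class"}},
    {"about": "obj-42", "party": "second", "by": "Joe Jackson",
     "says": {"rating": 5}},
    {"about": "obj-42", "party": "third",  "by": "rights broker",
     "says": {"license": "http://example.org/license/1"}},
]

def gather(resource_id: str, parties=("first", "second", "third")) -> list:
    """Collect the statements about a resource from the parties we care about."""
    return [s for s in STATEMENTS
            if s["about"] == resource_id and s["party"] in parties]

# An aggregator that only wants author and third-party metadata:
for s in gather("obj-42", parties=("first", "third")):
    print(s["by"], "->", s["says"])
```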

     

    Learning Object Metadata: Referencing

    Think about your metadata environment. Think about your personal metadata. Even think about your c.v., maybe think about it a bit more abstractly, because your c.v. is typically a paper document and has the limitations inherent in physical objects.

    The metadata about you isn't simply the metadata about you. If you think about it. I live in a house, for example, it's a nice little house, it's on a quiet street in Moncton, New Brunswick, in eastern Canada. That house has metadata. That house is older than I am. It had metadata before I did. It has a creation date, which is approximately 80 years ago; it's not very reliable metadata because that was before they invented metadata. The house has an address, a street address, a lot description number and all of that. It has its history of owners, its provenance and all of that.

    That metadata describing my house actually exists separately from me, it’s down at City Hall. I, when I give you my metadata, I refer you to my house metadata, typically I’ll just refer you with an address. I’ll simply refer you to where you can get more metadata.

    Same with pets. I got a cat, and the cat came with papers. Cat had its own metadata. Cat’s metadata isn’t my metadata because cat might go away. And I continue. I might give the cat, with its associated metadata, to someone else. Cat might die, in which case I close the file and archive it. Your car, same sort of thing, car has papers.

    An entity does not exist in isolation, it’s not a sentence like ‘Paris is the capital of France.’ An entity is related to other entities. Inherently related. And we need to express this in metadata.

    So I call this 'metadata referencing'. And other people call it other things, none of this is unique to me, but what it isn't is in LOM. Now metadata about a given resource is not stored in a single file. And, as you go through, say, some learning object metadata, from point to point as you refer to different types of resources, instead of embedding the metadata right in there, you simply point, or reference, an external metadata file.

    I've proposed this on a number of fronts. I wrote a paper about expressing digital rights in metadata175. One way of doing it is you take your digital rights, your ODRL file, or your XrML file (I still have trouble saying MPEG-REL), and embed it, all 80 lines, in the description field of the learning object metadata. And what that means is that if you have a million learning objects, then you have this rights information replicated a million times, and if you want to change your price, you're in trouble. But if you take your rights metadata and create a rights model, and you put that in a specific spot, which I call a rights broker, then in your learning object metadata you simply point to the location of your rights metadata.
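
    A sketch of the difference, under assumed names and URLs: embedding the rights document in every record means a price change touches a million records, while storing one rights model at a broker and pointing to it means the change happens in exactly one place.

```python
# Sketch: rights by reference instead of rights by embedding.
# The broker URL and the rights model are invented for illustration.

RIGHTS_BROKER = {
    "http://broker.example.org/models/standard-fee": {
        "permission": "display", "requirement": "payment", "price": 5.00,
    }
}

# Each learning object record carries only the pointer (imagine a million of these):
learning_objects = [
    {"id": f"obj-{i}", "title": f"Object {i}",
     "rights": "http://broker.example.org/models/standard-fee"}
    for i in range(1000)
]

# Changing the price touches one record at the broker, not a million LOM files.
RIGHTS_BROKER["http://broker.example.org/models/standard-fee"]["price"] = 6.00

def rights_for(obj):
    """Dereference the pointer to get the current rights model."""
    return RIGHTS_BROKER[obj["rights"]]

print(rights_for(learning_objects[0]))  # reflects the new price immediately
```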

    And that’s what Creative Commons176 does. Creative Commons, you have a web page, read through the web page, there’s a little Creative Commons logo, and if you look at the source of the page, you’ll see the rights metadata encoded in the page, but what that does is it’s a pointer to the canonical definition of, say, ‘non-commercial share-alike’ on the Creative Commons website. And that’s how it’s done. Now, of course, learning object metadata, we’ve got “yes, yes, description”.

    It’s not just that, the authors of resources, again, we refer to people about half a dozen times in learning object metadata and every time we’ve got this embedded vcard, and I sort of, I sit there and look at these learning object metadata files, and I say well what happens if the person changed jobs and got a new email address? Who’s going to go out and change the 25,000 learning object metadata records to reflect this new information? That makes no sense.

    But if a person had their own metadata record – Friend of a Friend is a popular format, not necessarily the definitive format – then in the learning object metadata you simply have a reference to that person’s metadata, ‘creator: where that person is’. Then a person can change their job, change their address, change their name, and they would not obsolete one learning object metadata file.

    You see this already in RSS, or I should say more accurately, Atom, with the different link elements. Atom allows you to have several links associated with a resource: one of the links will be the actual location of the resource, another link will be a back-up location, another link will be a resource that the current resource talks about, and so on; they're all defined in the Atom 1.0 specs177. And you're beginning to find them in web pages as well. I'll talk a bit about that shortly.
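
    For instance, here is a sketch of an Atom-style entry carrying several links distinguished by their rel values ('alternate', 'related', 'enclosure' and 'via' are among the relations defined in the Atom 1.0 specification); the URLs are placeholders.

```python
# Sketch: one Atom entry, several <link> elements distinguished by rel.
# The URLs are placeholders; the rel values come from the Atom 1.0 spec.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM)

entry = ET.Element(f"{{{ATOM}}}entry")
ET.SubElement(entry, f"{{{ATOM}}}title").text = "A learning resource"

links = [
    ("alternate", "http://example.org/resource.html"),    # the resource itself
    ("related",   "http://example.org/background.html"),  # something it talks about
    ("enclosure", "http://example.org/resource.mp3"),     # an attached media file
    ("via",       "http://example.org/original-source"),  # where it came from
]
for rel, href in links:
    ET.SubElement(entry, f"{{{ATOM}}}link", rel=rel, href=href)

print(ET.tostring(entry, encoding="unicode"))
```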

    So here’s the picture. So pretend that this is learning object metadata, I adapted the vocabulary for my own purposes, so on the learning object website, the name, the description, the location. The author, now the author isn’t a string ‘Stephen Downes’, because that’s not a good way to store that information, the author instead is a pointer to the author website. In my FOAF file. And indeed, I work for a company, biggest one in Canada – well I don’t know if that’s true – but the company, it doesn’t just say ‘National Research Council’, it’s a pointer to the company metadata, describing that company. If I change jobs, I just change that pointer. If my company changes names – it’s a government entity, could happen – then they change their thing, I don’t need to change anything on mine. The rights on the broker website. And so on, I’ve just picked a few things here, but we could expand this.
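
    Here, with invented URLs and placeholder values, is a rough sketch of such a record in code: literal fields for the resource itself, and pointers (to a FOAF file, an organization record, a rights broker) for everything that has metadata of its own, resolved and mirrored locally when needed.

```python
# Sketch of a 'resource profile' style record: literals for the resource itself,
# references (URLs) for everything that has its own metadata. URLs and the
# stand-in remote records below are invented for illustration.

record = {
    "name": "Principles of Distributed Representation",
    "description": "Conference talk on metadata and networks",
    "location": "http://example.org/talks/sac2005.html",
    "author": "http://example.org/people/downes.foaf",      # pointer, not a string
    "organization": "http://example.org/orgs/nrc.rdf",      # pointer to org metadata
    "rights": "http://broker.example.org/models/by-nc-sa",  # pointer to rights model
}

# Stand-in for fetching external metadata; a real system would do an HTTP GET.
REMOTE = {
    "http://example.org/people/downes.foaf": {"name": "Stephen Downes"},
    "http://example.org/orgs/nrc.rdf": {"name": "National Research Council"},
    "http://broker.example.org/models/by-nc-sa": {"license": "non-commercial share-alike"},
}

_cache = {}

def resolve(url: str) -> dict:
    """Dereference a metadata pointer, mirroring it locally rather than
    looking it up across the network every time."""
    if url not in _cache:
        _cache[url] = REMOTE[url]   # the local mirror of the canonical record
    return _cache[url]

print(resolve(record["author"])["name"])     # 'Stephen Downes', from the FOAF pointer
print(resolve(record["rights"])["license"])  # looked up from the rights broker
```

    If the author changes jobs or the broker changes the license terms, the canonical record changes in one place and every profile that points to it picks up the change on its next aggregation.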

     

    Two Principles of Distributed Metadata

    This picture gives me two basic principles of distributed metadata. And those of you who are involved in database design should be thinking ‘normalization’. Those of you who are not involved in database design may want to Google the concept; this is not original to me.

    1. Metadata – and put in the caveat, where possible – metadata for any given entity should not be stored in more than one place. There should be one canonical location for my name. And that's on my website. Not your websites, those of you who are university people. It's on my website, because it's my name. And that's the only place it's stored. Now it can be mirrored, it can be reflected, because you're thinking about database design, you don't want to be doing lookups across the entire internet every time you go to see a record. So you pull this information in, you mirror it on your own site, sure, no problem. But the canonical information is stored on my website, and from time to time you aggregate my information, you bring it in, just to make sure that your information still coincides with my information. Now for mission-critical information you'd be aggregating a lot, and for bibliographical information you might do it once a month. And the reason for that is simply data integrity. You multiply the locations of a piece of data, say, my name, and you multiply the possibility for errors. My name is spelled 'Stephen Downes'. I can give you eight different ways of getting that wrong. And they always get it wrong. Steven with a v. Downs without my e. Sometimes they do both. I've had 'Stephe'. And so on. And some of them I do myself, typing my name in all these fields all the time.
    2. The second principle, and this is the one that I think is most violated by LOM, metadata for a given entity should not (except as a mirror, cache or whatever) contain metadata for a second entity. We need to keep our entities straight and have separate metadata for the different entities. Now if you think about it, it’s going to give us a lot more expressive power because it is going to allow us – how do I want to describe this? – it allows us to do, for example, much more finely grained searches.

    I did a paper called The Semantic Social Network178 where I talked about some of these principles. The idea is, you have social network metadata, where you have a person and they list all their friends, and then you have content metadata, like RSS, where you describe all your blog posts or your essays or whatever, and right now these are two separate things. But if you merge them together, that puts friends together with content. As I put it in my newsletter the other day, my social network is my content network; they're one and the same thing. They just have different types of entities.

    So, I could in principle, if I was a better software author, do a search, ‘Find all the papers written by people who are friends of David Wiley.’ Now, why would I do this? Well, I don’t know. What if I narrow it down? ‘Find all the papers on learning objects written by people who are friends of David Wiley.’ That is going to give me, I would bet, an authoritative collection of papers on learning objects, because I know David is an authority on learning objects. His friends are probably also authorities. At least those who write about learning objects.
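
    A toy sketch of that kind of multi-entity query, over made-up data: one structure holds the social network (who is whose friend), another holds content metadata (who wrote which paper, on what topic), and the query simply joins them. The friends and papers here are invented; only the name David Wiley comes from the example above.

```python
# Toy sketch of a cross-entity search: join social-network metadata with
# content metadata. The friendships and papers are invented.

FRIENDS = {
    "David Wiley": {"Alice", "Bob"},
}

PAPERS = [
    {"title": "Reusable Learning Objects", "author": "Alice", "topic": "learning objects"},
    {"title": "Weblogs in Education",      "author": "Bob",   "topic": "blogging"},
    {"title": "Object Granularity",        "author": "Bob",   "topic": "learning objects"},
    {"title": "Networked Learning",        "author": "Carol", "topic": "learning objects"},
]

def papers_by_friends_of(person: str, topic: str = None) -> list:
    """Find papers written by friends of `person`, optionally filtered by topic."""
    friends = FRIENDS.get(person, set())
    return [p for p in PAPERS
            if p["author"] in friends and (topic is None or p["topic"] == topic)]

print(papers_by_friends_of("David Wiley", topic="learning objects"))
# Alice's and Bob's learning-object papers are returned, but not Carol's.
```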

    So you get that kind of – I’m looking for the word there – multi-type entity search capacity. Trying to come up with a phrase off the top of my head, it’s always a bit hard.

     

    Web 2.0: The Principles More Widely Applied

    What’s important now, remember all my layers, these principles apply not just to metadata. They apply to learning resources themselves. We now have this picture of learning resources in our mind of, well, it is like a can of soup and you stick it in the back shelf and you pull it out when you want it. But it’s not like that. The learning resource itself is distributed, itself brings in different types of entities.

    It applies to applications themselves. Now I’m not talking, like, Java and all that sort of thing but I’m talking more along the lines of separate free-standing applications loosely connected through communication channels, not integrated into one large piece of enterprise software.

    The web is changing, and it’s changing in this very direction. You may have heard the concept ‘Web 2.0179‘. That’s not just a slogan. It’s a shifting of the idea of the web from being a medium to the idea of the web as a platform, or if you will, an environment. It just is the shift from the idea of the web being communications, like in that old picture of knowledge, to an environment, or a network, or pick your own metaphor, where you’re not just dealing with content, you’re actually immersed in it, part of it. It becomes a place where you do things, it becomes even a place where you live.

    E-learning 2.0 – I’ve got a whole other slide show on e-learning 2.0180. Here’s the picture. It isn’t my picture, Scott Wilson181 did the original and Dave Tosh182 has done more. The idea of the future virtual learning environment, that’s your space, and then, you are connected to all these applications, to all this content, to all this data, to all this metadata, around the web.

    Those of you – because I've witnessed this – most of you, all of you, are working on university-centric systems. E-Learning 2.0 is not university-centric. E-learning 2.0 is where you're one of those bubbles, you're part of the student's, the person's, overall learning environment, and your metadata, and your interactions, your identity sign-ons, have to play nice with all of these other applications, not just other universities, but newspapers, blogging sites, dating sites. Different points of view. Or as I've got here, Flickr183 photo sites.

    Learning becomes a network phenomenon. It becomes not just a place where we receive the service or the content of learning, but it becomes an interactive back and forward network environment, where everybody’s receiving and everybody’s creating, everybody’s remixing. We see social networks and communities, and as I’ve talked about before, the semantic social network. Networks of interactions. The personal learning centre.

    We're beginning to see this already in Web 2.0. There's a link there at the bottom, microformats.org184, where these microformats are beginning to be developed for embedding in an XHTML file. So they've got things like hcalendar, hcard, rel-license, a whole bunch of things. There's a new one that just came in the other day for video185. And these microformats are embedded in web pages; right now this is just an XHTML initiative, but in the future they'll be embedded in RSS and other types of XML metadata as well.
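
    To give the flavour, here is a small sketch of microformat markup embedded in ordinary XHTML: an hCard for a person and an hCalendar event, using the published class names (vcard, fn, org, url, vevent, summary, dtstart, location). The person and the URL are placeholders; the event details just echo this talk.

```python
# Sketch: microformats embedded in plain XHTML. The class names are the
# published hCard and hCalendar property names; the person and URL are
# placeholders invented for the example.
HCARD = """
<div class="vcard">
  <span class="fn">Jane Example</span>,
  <span class="org">Example University</span>,
  <a class="url" href="http://example.org/jane">homepage</a>
</div>
"""

HCALENDAR_EVENT = """
<div class="vevent">
  <span class="summary">Metadata workshop</span> on
  <abbr class="dtstart" title="2005-08-12">August 12, 2005</abbr>
  at <span class="location">Snowmass, Colorado</span>
</div>
"""

print(HCARD)
print(HCALENDAR_EVENT)
```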

    The Web 2.0 checklist186. This is another take on the principles of distributed metadata. Structured microcontent, like I described. The data is outside. It comes in through the interactions. The bits of the network – it's not all one big monolithic piece of software like the one running on my computer, but different, small pieces of software that talk to each other in application-specific and resource-specific microformats and APIs. That's why you get the Flickr API187. That's why you get the Google Maps API188. And you use these APIs the way you use media-specific metadata. The single identity, the single place for that personal thing – I drafted a proposal189 on that, it's at that URL there. User-generated, user-managed content, applications, network as a whole.

    Michael Feldstein190 yesterday wrote, and I quite agree with this, “We need a system that is optimized toward slotting in new pieces as they become available, not as an after-thought or an add-on, but as a fundamental characteristic of the system.” Try doing that with Blackboard or WebCT.

     

    Concluding Remarks

    The take-away. And I am going to come in under time. Charles Vest talked yesterday about the meta-university. If I may be so audacious, this – what I've described here – is the information architecture for the meta-university. Now you might not agree with all of the details and everything, but it is going to be very much like that, and it is going to be very much like that because, really, that's the only way to do it. The key here is not large integrated systems but small flexible bits that are interconnected. And that's true of applications, it's true of content – like websites, pictures, images, graphics, sound – and it's true of metadata.

    And that leads us to this. Learning object metadata will be rewritten. Or maybe bypassed entirely. That's a prediction. I'll stake my reputation as a pundit on it. It's going to be rewritten. And it's going to be rewritten because it has to be, because as we work with learning object metadata as it is currently incarnated, unless we're working within a large monolithic entity like the U.S. military, learning object metadata will be found to be too rigid, too inflexible, too narrowly defined, to do the sorts of things that we want to do with it.

    And instead, we’re going to get the type of learning object metadata that will be similar to – although, I know these committees, so it will be different from – the resource profiles that I’ve described here, where it will bring in the different types of microformats, where metadata will be distributed, will do things like harvest second-party and third-party metadata.

    And that is my last slide, I thank you very much for inviting me, it has been a pleasure, and I really appreciate you staying for the whole talk. Thank you very much.

    Footnotes

    134 EDUCAUSE. Website. Accessed August 9, 2005. http://www.educause.edu/

    135 Seminars on Academic Computing. August 7-10. EDUCAUSE. Conference website. http://www.educause.edu/DirectorsLeadershipSeminar/6222

    136 Snowmass Village. Website. http://www.snowmassvillage.com/

    137 Wikipedia. Che Guevara. Accessed August 13, 2005. http://en.wikipedia.org/wiki/Che_Guevara

    138 Guevara, Ernesto (Che). The Motorcycle Diaries : A Latin American Journey. Ocean Press (September 15, 2004). http://www.amazon.com/exec/obidos/tg/detail/-/1876175702/102-7570594-3413754?v=glance

    139 Chomsky, Noam. Syntactic Structures. Walter De Gruyter Inc; Reprint edition (June, 1978). http://www.amazon.com/exec/obidos/tg/detail/-/3110154129/102-7570594-3413754?v=glance

    140 Fodor, Jerry A. The Language of Thought. Harvard University Press (January 1, 1980). http://www.amazon.com/exec/obidos/tg/detail/-/0674510305/102-7570594-3413754?v=glance

    141 Google. Search results. “Bridging the Gap” conference. August 13, 2005. http://www.google.ca/search?q=%22bridging+the+gap%22+conference

    142 Wikipedia. Checksum. Accessed August 13, 2005. http://en.wikipedia.org/wiki/Checksum

    143 Moore, Michael G. Distance Education Theory. The American Journal of Distance Education. Volume 5, Number 3, 1991. http://www.ajde.com/Contents/vol5_3.htm

    144 World Wide Web Consortium. Resource Description Framework (RDF). October 21, 2004. http://www.w3.org/RDF/

    145 Wittgenstein, Ludwig. Philosophical Investigations. G.E.M. Anscombe, trans. Prentice Hall; 3rd edition (1999). http://www.amazon.com/exec/obidos/tg/detail/-/0024288101/102-7570594-3413754?v=glance

    146 Quine, W.V.O. Word and Object. The MIT Press (March 15, 1964). http://www.amazon.com/exec/obidos/tg/detail/-/0262670011/102-7570594-3413754?v=glance

    147 van Fraassen, Bas C. The Scientific Image. Oxford University Press (January 1, 1982). http://www.amazon.com/exec/obidos/tg/detail/-/0198244274/102-7570594-3413754?v=glance

    148 Hanson, Norwood Russell. Patterns of Discovery: An Inquiry into the Conceptual Foundations of Science. Cambridge University Press (January 1, 1958). http://www.amazon.com/exec/obidos/tg/detail/-/0521051975/102-7570594-3413754?v=glance

    149 Lakoff, George. Women, Fire, and Dangerous Things. University Of Chicago Press; Reprint edition (April 15, 1990). http://www.amazon.com/exec/obidos/tg/detail/-/0226468046/102-7570594-3413754?v=glance

    150 Stalnaker, Robert C. Inquiry. The MIT Press (March 13, 1987). http://www.amazon.com/exec/obidos/tg/detail/-/0262691132/ref=pd_sim_b_6/102-7570594-3413754?%5Fencoding=UTF8&v=glance

    151 Lewis, David K. Counterfactuals. Blackwell Publishers (December 1, 2000). http://www.amazon.com/exec/obidos/tg/detail/-/0631224254/ref=pd_sim_b_2/102-7570594-3413754?%5Fencoding=UTF8&v=glance

    154 Quine, W.V.O. and Ullian, J.S. The Web of Belief. McGraw-Hill Humanities/Social Sciences/Languages; 2nd edition (February 1, 1978). http://www.amazon.com/gp/product/0075536099/102-7570594-3413754?%5Fencoding=UTF8&v=glance

    155 Fergus, Paul, et.al. Capturing Tacit Knowledge in P2P Networks. PGNet 2003. Accessed April 30, 2012. http://www.cms.livjm.ac.uk/pgnet2003/submissions/Paper-27.pdf

    156 Johnson, Steven. Emergence: The Connected Lives of Ants, Brains, Cities, and Software. Scribner (September 19, 2001). http://www.amazon.com/exec/obidos/tg/detail/-/068486875X/qid=1123967137/sr=1-2/ref=sr_1_2/102-7570594-3413754?v=glance&s=books

    157 Surowiecki, James. The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations. Doubleday (May 25, 2004). http://www.amazon.com/exec/obidos/tg/detail/-/0385503865/102-7570594-3413754?v=glance

    158 Barabási, Albert-László. Linked: The New Science of Networks. Perseus Books Group; 1st edition (May, 2002). http://www.amazon.com/exec/obidos/tg/detail/-/0738206679/102-7570594-3413754?v=glance

    159 Watts, Duncan J. Six Degrees: The Science of a Connected Age. W. W. Norton & Company (February, 2003). http://www.amazon.com/exec/obidos/tg/detail/-/0393041425/102-7570594-3413754?v=glance

    160 Kuhn, Thomas S. The Structure of Scientific Revolutions. University Of Chicago Press; 3rd edition (December 15, 1996). http://www.amazon.com/exec/obidos/tg/detail/-/0226458083/102-7570594-3413754?v=glance

    161 Wenger, Etienne. Communities of Practice: Learning, Meaning, and Identity. Cambridge University Press (December 1, 1999). http://www.amazon.com/exec/obidos/tg/detail/-/0521663636/qid=1123967868/sr=1-1/ref=sr_1_1/102-7570594-3413754?v=glance&s=books

    162 Vest, Charles M. Claire Maple Address: OpenCourseWare and the Emerging Global Meta University. Notes by Alan Levine. Seminars on Academic Computing, August 8, 2005. http://jade.mcli.dist.maricopa.edu/cdb/2005/08/08/opencourseware-sac2005

    163 MIT OpenCourseWare. Website. Massachusetts Institute of Technology. August 13, 2005. http://ocw.mit.edu/index.html

    164 Bryant, Lee. Smarter, Simpler Social: An introduction to online social software methodology. Headshift, April 18, 2003. No longer extant. Original URL: http://www.headshift.com/moments/archive/sss2.html

    165 Guimera, R., et.al. Self-similar community structure in a network of human interactions. Physical Review E, vol. 68, 065103(R), (2003).

    166 Buchanan, Mark. Nexus: Small Worlds and the Groundbreaking Theory of Networks. W. W. Norton & Company (June, 2003). http://www.amazon.com/exec/obidos/tg/detail/-/0393324427/102-7570594-3413754?v=glance

    167 Downes, Stephen. Resource Profiles. Journal of Interactive Media in Education, 2004 (5). http://www.downes.ca/files/resource_profiles.htm

    168 IEEE. Draft Standard for Learning Object Metadata. 1484.12.1-2002, 15 July 2002. http://ltsc.ieee.org/wg12/files/LOM_1484_12_1_v1_Final_Draft.pdf

    169 Friend of a Friend. The foaf project. Website. August 13, 2005. http://www.foaf-project.org/

    170 Internet Mail Consortium. vCard: The Electronic Business Card. January 1, 1997. http://www.imc.org/pdi/vcardwhite.html

    171 Wikipedia. RSS (file format). August 13, 2005. http://en.wikipedia.org/wiki/RSS_(protocol)

    172 Dublin Core Metadata Initiative. Website. http://dublincore.org/

    173 Richards, Griff, et.al. The Evolution of Learning Object Repository Technologies: Portals for On-line Objects for Learning. Journal of Distance Education. Vol. 17, No 3, 67-79, 2003. http://cade.icaap.org/vol17.3/richards.pdf

    174 Landon, Bruce, and Robson, Robby. Technical Issues in Systems for WWW-Based Course Support. International Journal of Educational Telecommunications, 1999, 5(4), 437-453. http://www.eduworks.net/robby/papers/technical_issues.pdf

    175 Downes, Stephen, et.al. Distributed Digital Rights Management: The EduSource Approach to DRM. The Open Digital Rights Language Initiative – Workshop 2004. http://www.downes.ca/files/DDRM_19April2004.pdf

    176 Creative Commons. Website. Accessed August 14, 2005. http://creativecommons.org/

    177 Nottingham, M., and Sayre, R. The Atom Syndication Format. IETF, July 11, 2005. http://www.ietf.org/internet-drafts/draft-ietf-atompub-format-10.txt

    178 Downes, Stephen. The Semantic Social Network. Stephen’s Web. February 14, 2004. http://www.downes.ca/post/46

    179 Wikipedia. Web 2.0. Accessed August 14, 2005. http://en.wikipedia.org/wiki/Web_2.0

    180 Downes, Stephen. E-Learning 2.0 – Alberta Cut. ADETA, June 10, 2005. http://www.downes.ca/files/edmonton.ppt

    181 Wilson, Scott. Future VLE – The Visual Version. Scott’s Workblog. January 25, 2005. http://www.cetis.ac.uk/members/scott/entries/20050125170206

    182 Tosh, Dave. A concept diagram for the Personal Learning Landscape. ERADC. April 08, 2005. http://elgg.net/dtosh/weblog/398.html

    183 Flickr. Website. Accessed August 14, 2005. http://www.flickr.com/

    184 microformats.org. Website. Accessed August 14, 2005. http://microformats.org/

    185 Rein, Lisa. Getting Started On A Harmonized Video Metadata Model. Microformats Wiki. August 6, 2005. http://microformats.org/wiki/video-metadata-model

    186 Leene, Arnaud. Web 2.0 checklist 2.0. Hovering Above. July 21, 2005. http://www.sivas.com/aleene/microcontent/index.php?id=P2205

    187 Flickr. Flickr API Documentation. Website. August 14, 2005. http://www.flickr.com/services/api/

    188 Google. Google Maps API. Website. August 14, 2005. http://www.google.com/apis/maps/

    189 Downes, Stephen. mIDm – Self-Identification the World Wide Web. Stephen’s Web. May 4, 2005. http://www.downes.ca/idme.htm

    190 Feldstein, Michael. The Long Tail of Learning Applications. E-Literate. August 7, 2005. http://mfeldstein.com/index.php/the_long_tail_of_learning_applications

    Snowmass, Colorado, August 12, 2005

     

    When Words Lose Meaning

    In which I explain what I meant by my comment to this post191 from Doug Johnson. I commented, “If the word is not the thing, how do you evaluate the sentence ‘Dragons are green?'”

    It’s probably foundational for semiotics that the word is a sign or symbol, and in some way stands for or represents something else. This separation allows us to meaningfully use words like ‘red’ without particularly worrying about the reality of whatever they represent.

    But the question of whether essence implies existence shaped much of 20th century philosophy. What do you say about the meaning of words that represent or refer to things that don’t exist? If the meaning of the word ‘dragon’ does not depend on representation of or reference to dragons – since there are no dragons – then where does it get its meaning?

    You might say that ‘dragon’ is just a fictional example, that we don’t need to worry about its meaning, it’s just metaphorical. But what about a sentence like (to use Bertrand Russell’s famous example) “Brakeless trains are dangerous.” It’s not fiction, it is, moreover, true, and known to be true, and yet (by virtue of that very fact!) there are no brakeless trains.

    So, while it’s simple and appealing to say, “The symbol is NOT the thing symbolized; the word is NOT the thing; the map is NOT the territory it stands for,” there are important senses in which it’s not true. In some important senses, the thing is the thing symbolized. When we talk about ‘tiger’ we are in fact talking about the concept ‘tiger’, which is just what is contained in the word ‘tiger’, and not about things in the world at all. When we talk about ‘the tiger’ we are (Russell would say) making two claims: that there is a thing that exists, and that it is an instance of this concept we call ‘tiger’. All the referring happens in the word ‘the’, not the word ‘tiger’.

    You might think, this is all meaningless babble. Who cares? But it has a direct and immediate impact on how we think about learning. On the simple picture, you just show people some tigers (or trains, or dragons) and they learn about them. Or (since that’s very inconvenient) you simply give them a series of propositions about tigers, trains and dragons (“dragons are green”, “tigers are orange”, etc.) and that teaches you about the world. Except – it doesn’t. It teaches you about language. Most of what we learn about in school is language, not reality. Math – science – these are all disciplines of language.

    In a very real sense, a traditional (text-based, language-based) education is an education based on fiction. Very useful fiction, to be sure, since most other people are willing participants in that fiction, and it helps us do useful things. But it renders us unusually vulnerable to propaganda and media, since we can convince people of some reality merely through the use of words – actual evidence or experience is not required. We buy into beliefs like 'the world is described by numbers', 'if it can't be measured it can't be managed' and other variations on the old positivist principle of meaningfulness.

    Most of the work in late 20th century philosophy goes to show that meaning and truth are embedded in the representational system – that, in other words, the word is the thing the word describes, the map is the terrain (if you don’t believe me, try walking across an international border). van Fraassen on how explanations in science are descriptive mostly of our expectations. Derrida on how the meaning of the word is based largely on the range of alternative possible words that could be used. Quine on how translations are based on guesses (or what he called ‘analytic hypotheses’).

    None of this implies that there is no reality, that there is no physical world, that there is no experience. Of course there is a great deal of all three. It’s just that the supposedly privileged connection between word and reality – the one represented by ‘The symbol is NOT the thing symbolized’ – is an illusion. And that these representational and referential systems are elaborate fictions.

    This is not new knowledge. It is very old knowledge. And as the Taoists used to say, knowing that these distinctions we find in language represent our interpretations of the world, represent our projections onto the world, is very powerful. Very enlightening. Because it frees us from the absolutes we believed ruled us with an iron grip. What people thought were right and wrong, for example (which is why we can make sense of how something that was once 'right' – slavery, say – is now 'wrong'). What people thought were plants or animals. Sentient or senseless. Planets or non-planets.

    This is not an endorsement of relativism. It is merely the assertion that what is represented in language is fiction. If we rely solely on language – solely on what we're told – then anything can be true. Look what happens to viewers of Fox News! What it tells us is that we cannot rely on words, on language, on mathematics, on representational systems. We have to, in our own lives, appeal to our own experiences, our own connection with the world itself. The Taoists would say we have to connect to 'The Way' – the ineffable reality behind human descriptions. But it's not an appeal to the mystical. It's an appeal to the world that lies beyond our descriptions of the world.

    In an important sense, then, I want to say that semiotics is wrong. Not in the sense that it is descriptively false – for no doubt there is a truth (or, as experience shows, many, many truths) in semiotic accounts of meaning and representation. But rather, that semiotics as epistemology, or even as ontology, is false. That there is no actual relation of reference or representation, only (within the referential or representative system) a fiction of one.

    In a sense, we’re at the same position today that Descartes was at in 1616 when he said, “I entirely abandoned the study of letters.” At that time, knowledge, philosophy and science were in the hands of the Scholastics, who understood the world through finer and finer distinctions and relations between the categories. Descartes decided – and proved, through his sceptical argument – that theirs was a world of fiction, that we would not understand the nature of reality by dividing things over and over again into increasingly arbitrary categories. Descartes (and his contemporaries, for this was a broad social movement) derived an analytical method of dividing the world into parts, and using mathematics, not qualities, to represent this fundamental nature.

    Now we understand that mathematics is yet another kind of language. We understand that merely measuring the world is to produce a kind of fiction. Though, to be sure, there are many Scholastics in today’s world who are like the doctors of medieval times, shuffling their figures in finer and finer dimensions to articulate very precisely one fiction after another. And now a lot of people are pointing to networks or connections (etc.) as the new underlying description of reality. But we ought to know by now that networks, too, are a form of fiction, that they are our imposition of this or that order on our perceptions, experiences and reality.

    When we teach, while it is our job to ensure that our students are well versed in the fictions of the day, for they’ll need them in order to socialize and make a living, it is our obligation to ensure that our students are not entrapped by these fictions, that they have it within themselves to touch their own reality, their own physicality, their own experience.

    Footnotes

    191 Doug Johnson. When words lose meaning. The Blue Skunk Blog (weblog), January 21, 2011. http://doug-johnson.squarespace.com/blue-skunk-blog/2011/1/21/when-words-lose-meaning.html

    Moncton, January 24, 2011

    Concepts and the Brain

    Posted to IFETS, June 19, 2007.

    From: Gary Woodill

    One reference that supports the contention that concepts are instantiated in the brain is Manfred Spitzer’s book The Mind within the Net: Models of Learning, Thinking, and Acting. Spitzer spells out how this takes place. For a brief review of this book see my April 10, 2007 blog entry entitled The Importance of Learning Slowly.192

    The Synaptic Self: How our brains become who we are by Joseph LeDoux covers much of the same ground. Nobel laureate Eric Kandel outlines a model of how learning is recorded in the brain in his easy-to-read In Search of Memory: The Emergence of a New Science of Mind.

    I second these points and especially the recommendation of The Synaptic Self, which is a heady yet cogent description of the mind as (partially structured) neural network. Readers interested in the computational theory behind neural networks are directed to Rumelhart and McClelland’s two-volume Parallel Distributed Processing.

    That said, the statement ‘concepts are instantiated in the brain’ depends crucially on what we take concepts to be. Typically we think of a concept as the idea expressed by a sentence, phrase, or proposition. But if so, then there are some concepts (argue opponents of connectionism) that cannot be instantiated in the brain (at least, not in a brain thought of as essentially (and only) neural networks).

    For example, consider concepts expressing universal principles, such as 2+2=4. While we can represent the individual elements of this concept, and even the statement that expresses it, in a neural network, what we cannot express is what we know about this statement, that it is universally true, that it is true not only now and in the past and the future, but in all possible worlds, that it is a logical necessity. Neural networks acquire concepts through the mechanisms of association, but association only produces contingent, and not necessary, propositional knowledge.
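
    To see the limitation concretely, here is a toy associator (a sketch of my own, in Python, with made-up numbers rather than anything drawn from the authors cited here). It can learn to expect ‘4’ after ‘2+2’, but only ever as a matter of degree:

```python
# A minimal sketch (an illustration only): a toy associator that learns
# "2+2 -> 4" as a weighted habit, never as a necessary truth.
from collections import defaultdict

class ToyAssociator:
    def __init__(self):
        # association strengths between observed cues and outcomes
        self.weights = defaultdict(float)

    def observe(self, cue, outcome):
        # co-occurrence strengthens the tie between cue and outcome
        self.weights[(cue, outcome)] += 1.0

    def expectation(self, cue):
        # the associator can only report how strongly past experience supports
        # an outcome; it has no notion of truth in all possible worlds
        candidates = {o: w for (c, o), w in self.weights.items() if c == cue}
        total = sum(candidates.values())
        return {o: w / total for o, w in candidates.items()} if total else {}

learner = ToyAssociator()
for _ in range(100):
    learner.observe("2+2", "4")
learner.observe("2+2", "5")  # one stray observation is enough to dilute the habit

print(learner.expectation("2+2"))  # roughly {'4': 0.99, '5': 0.01} - contingent, not necessary
```

    However long the training runs, the result is a strength of expectation, never the necessity that ‘2+2=4’ is taken to express.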

    There are two responses to this position. Either we can say that associationist mechanisms do enable the knowledge of universals, or the concepts that we traditionally depict as universals are not in fact as we depict them. The former response runs up against the problem of induction, and is (I would say) generally thought to be not solvable.

    The latter response, and the response that I would mostly endorse, is that what we call ‘universals’ (and, indeed, a class of related concepts) are most properly thought of as fictions, that is to say, the sentences expressing the proposition are shorthand for masses of empirical data, and do not actually represent what their words connote, do not actually represent universal or necessary truths. Such is the approach taken by David Hume, in his account of custom and habit, by John Stuart Mill, in his treatment of universals, even by Nelson Goodman, in his ‘dissolution’ of the problem of induction by means of ‘projectibility’.

    If we regard the meanings of words as fixed and accurate, therefore, and if we regard concepts to be the idea expressed by those words, then concepts cannot be instantiated in the brain, at least, not in a brain thought of as a neural network. If we allow, however, that some words do not mean what we take them to mean, that they are in fact ‘fictions’ (even if sometimes taken to be ‘fact’) then concepts can be instantiated in neural networks.

    Footnotes

    192 Gary Woodill. The Importance of Learning Slowly. Brandon Hall Blogs. April 10, 2007. http://www.garywoodill.com/2007/04/the-importance-of-learning-slowly/ Original link (no longer extant): http://www.brandon-hall.com/weblogs/garywoodill.htm

    Moncton, June 19, 2007

     

    On Thinking Without Language

    Responding to Dave Pollard:193

    I have long written on the topic of subsymbolic communication and reasoning. So I think you strike a note here. But it could be more sharply hit:

    You write, “What is important is that they are effective, workable, successful. Not necessarily the best decisions, but good decisions. These decisions are the result of intellectual, emotional, sensory/somatic (body) and intuitive knowledge (to use the Jungian model) and integrate the conscious and unconscious.”

    I think that decisions based on subsymbolic reasoning are the best, and not decisions that are merely good enough.

    There is a mechanism underlying subsymbolic reasoning. You suggest that the mechanism is “the result of intellectual, emotional, sensory/somatic (body) and intuitive knowledge (to use the Jungian model).” I think you’re flailing here.

    Subsymbolic decisions and subsymbolic reasoning generally are the result of the experience of perceptual processes (which is where we get emotional, sensory and somatic influences).

    In a nutshell, it is the association of these experiences with previous experiences. Any experience, any perception, is the activation of millions of neural cells. These activations may, depending on the experience, include characteristic patterns of activation. It is the matching of these patterns that constitutes the basis for reasoning.
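
    A rough sketch of what ‘matching patterns’ might look like (my own illustration, not a model of any actual brain; the stored patterns and values are invented): treat each experience as a vector of activations, and let recognition be nothing more than similarity to patterns laid down by past experience.

```python
# A sketch only: experiences as activation vectors, and 'reasoning' as the
# matching of a current pattern against patterns stored by past experience.
import math

def cosine(a, b):
    # similarity between two activation patterns
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# hypothetical stored activation patterns from previous experiences
memory = {
    "campfire": [0.9, 0.1, 0.8, 0.0],
    "thunderstorm": [0.1, 0.9, 0.2, 0.7],
}

def recognize(current):
    # the 'match' is simply the stored pattern most similar to the current activation
    return max(memory, key=lambda name: cosine(memory[name], current))

print(recognize([0.8, 0.2, 0.7, 0.1]))  # -> 'campfire'
```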

    These patterns may reflect any sort of perception – sights and sounds, music, animals, forms and faces. We may associate characteristic sounds with them – these characteristic sounds – words – are also patterns. But for many of our habitual experiences, there are no words. They are ineffable.

    Patterns are created from perception through a process of abstraction – we filter our perception, taking in some aspects, discarding the rest. Formal reasoning is this process taken to a great degree – it is abstraction of abstraction of abstraction. Eventually we arrive at ‘pure’ concepts – things like conjunction, entailment, existence, being – which form the basis for formal reasoning.
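
    As a crude illustration of abstraction-as-filtering (again a sketch of my own; the features are invented), each ‘layer’ simply keeps some aspects of the experience and discards the rest, until only very general features such as existence remain:

```python
# A rough sketch: abstraction as repeated filtering of a perception,
# keeping some aspects and discarding the rest.
experience = {
    "colour": "red", "shape": "round", "taste": "sweet",
    "exists": True, "count": 1,
}

def abstract(features, keep):
    # filter the perception: retain only the listed aspects
    return {k: v for k, v in features.items() if k in keep}

layer1 = abstract(experience, {"shape", "exists", "count"})   # drop the sensory detail
layer2 = abstract(layer1, {"exists", "count"})                # drop the particulars
layer3 = abstract(layer2, {"exists"})                         # 'pure' concepts: existence, being

print(layer1, layer2, layer3, sep="\n")
```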

    These concepts are extremely powerful, but their power is gained at the price of the abstraction. They express broad sweeping truths, but very little about the here and now.

    The reasoning of the master is a subtle dance between these two extremes, between the concrete and the universal, a waltz through the layers of abstraction, drawing subtly on each as it applies to the situation at hand.

    Footnotes

    193 Dave Pollard. Thinking Without Language. How to Save the World (weblog). January 9, 2007. http://howtosavetheworld.ca/2007/01/09/ Original link on Salon blogs (no longer extant): http://blogs.salon.com/0002007/2007/01/09.html#a1747

    Moncton, January 10, 2007

     

    Planets

    When I was young I was told there were six colours in the spectrum (I even learned a little song that names them).

    Red and orange, green and blue
    Shining yellow, purple too

    All the colours that you know
    Live up in a rainbow

    Now I’m told there are seven – they added indigo somewhere along the line.

    I have refused to accept indigo. So far as I’m concerned, there are still six colours in the spectrum.

    Now they are telling me that Pluto is not a planet. Again, I refuse to accept that. So far as I am concerned, Pluto is a planet (and so are Ceres, Xena and Sedna).

    Sure, there are authorities that will tell me that there are seven colours in the spectrum and eight planets in the solar system. But on what basis am I required to accept their definition?

    I have concluded: none. If I decide there are six colours, or twelve planets, that’s up to me. And – my take is – there’s no reason why society can’t allow both.

    It is the idea that there is only one distinct number of colours, or number of planets, that is wrong, and not any particular list of them.

    Try it. Try thinking this way. It is incredibly, extraordinarily, liberating.

     

    Moncton, August 26, 2006

     

    ‌Types of Meaning

    I don’t want to spend a whole lot of time on this, but I do want to take enough time to be clear that there are, unambiguously, numerous types of meaning.

    Why is this important? When we talk about teaching and learning, we are often talking about meaning. Consider the classic constructivist activity of ‘making meaning’, for example. Or even the concept of ‘content’, which is (ostensibly) the ‘meaning’ of whatever it is that a student is being taught.

    What are we to make of such theorizing in the light of the numerous ways that words, sentences, ideas and constructs can have meaning? What does ‘making meaning’ mean when we consider the range between logical, semantical, and functional meaning?

    The idea – often so central to transmission and transactional theorists of learning – that a word or sentence can have a single meaning, or a ‘shared meaning’, is tested to the extreme by an examination of the nature and constitution of that putative meaning.

    In any case, it is always better to show than to argue. Herewith, a bit of an account of some of the many different types of meaning:

    Literal meaning – the sentence means what it says. Also known as ‘utterance’ meaning (Griffiths).

    Logical meaning – the meaning of the sentence is determined by (is a part of) a set of logical inferences, such as composition, subordination, etc. Also called ‘taxis’. (Kies)

    Denotative meaning – the sentence means what it is about. The ‘reference’ of a sentence, as opposed to its ‘sense’. (Frege)

    Semantical meaning – meaning is truth (Tarski – ‘snow is white’ is true iff snow is white)

    Positivist meaning – the sentence means what it says that can be empirically confirmed or falsified (Ayer, Carnap, Schlick)

    Pragmatic meaning – the relationship between signs and their users. (Morris) Includes “identificational meaning, expressive meaning, associative meaning, social meaning, and imperative meaning.” (Lunwen)

    Intentional meaning – the sentence means what the author intended it to say. Also known as “sender’s meaning” (Griffiths); associated with John Searle; often includes conversational implicatures

    Connotative meaning – the sentence means what readers think about when they read it. Sometimes known as ‘sense’ (Frege). Also sometimes thought of as ‘associative’ meaning. (Morris) Includes ‘reflected’ meaning (what is communicated through association with another sense of the same expression, Leech) and collocative meaning (Leech)

    Social meaning – “what is communicated of the social circumstances of language use” (from Leech; Lunwen)

    Metaphorical meaning – the meaning is determined by metaphor, and not actual reference

    Emotive meaning – related to connotative – the type of emotion the sentence invokes

    Functional meaning – the sentence means what it is used for, what it does (Wittgenstein, meaning is use; Austin, speech acts). The ‘mode’ of a sentence is the function it plays in channeling communication – what degree of feedback it elicits, for example, or what degree of abstraction it considers. (Cope and Kalantzis)

    Type meaning – the sentence’s meaning is related to what it doesn’t say, to the range of possible words or sentences that could be said instead (Derrida). Gillett writes, “Part of the meaning of a word is its ‘register’. Which types of language is the word used in: letters or reports, spoken or written, biology or business etc?”

    Deictic meaning – meaning is determined with reference to the situation or context in which the word is used. Griffiths writes, “Deixis is pervasive in languages.” Common deictic frames include common understandings related to people (‘the boss’), time (‘tomorrow’), place (‘nearby’), participants (‘his’), even discourse itself (‘this’ article).

    Relevance, significance or value – “what is the meaning of life?”

    Accent – the manner in which the word is pronounced or emphasized can change its meaning.

    Intralingual meaning – the relationship between different signs; includes phonological meaning, graphemic meaning, morphological or lexemic meaning, syntactic meaning, and discoursal or textual meaning. (Morris)

    Thematic meaning – “what is communicated by the way in which the message is organized in terms of order and emphasis” (Leech; Lunwen)

    Some links:

    • Learning Vocabulary: Dealing With Meaning,194 from Using English for Academic Purposes,195 Andy Gillett, School of Combined Studies, University of Hertfordshire, Hatfield, UK.

    • An Introduction to English Semantics and Pragmatics,196 Patrick Griffiths
    • The Powers of Literacy,197 Bill Cope and Mary Kalantzis
    • Strange Attractors of Meaning,198 Vladimir Dimitrov
    • The Grammatical Foundations of Style,199 Daniel Kies, Department of English, College of DuPage
    • Foundations of the Theory of Signs,200 Charles W. Morris
    • Seven Types of Meaning, Geoffrey Leech, in Semantics,201 pp. 10-27.

    Footnotes

    194 Andy Gillett. Learning vocabulary: Dealing with meaning. Using English for Academic Purposes. School of Combined Studies, University of Hertfordshire. Accessed January 9, 2009. http://www.uefap.com/vocab/learn/meaning.htm

    195 Andy Gillett. Using English for Academic Purposes. School of Combined Studies, University of Hertfordshire. Accessed January 9, 2009. http://www.uefap.com/index.htm

    196 Patrick Griffiths. Section 1.2: Types of Meaning. An Introduction to English Semantics and Pragmatics. Edinburgh University Press. August 22, 2006. http://bit.ly/H8XRLk

    197 Bill Cope and Mary Kalantzis. The Powers of Literacy. Page 144. The Falmer Press. 1993. http://bit.ly/HAua6G

    198 Vladimir Dimitrov. Strange Attractors of Meaning. Centre for Systemic Development, University of Western Sydney, Richmond. 2000. http://www.zulenet.com/VladimirDimitrov/pages/SAM.html

    199 Daniel Kies. The Grammatical Foundations of Style. Department of English, College of DuPage, 1995. http://papyr.com/hypertextbooks/grammar/style.htm

    200 Charles W. Morris. Foundations of the Theory of Signs (International Encyclopaedia of Unified Sciences). University of Chicago Press. December, 1938. http://www.amazon.co.uk/Foundations-International-Encyclopaedia-Unified-Sciences/dp/0226575772

    201 Geoffrey Leech. Seven Types of Meaning. In Geoffrey Leech, Semantics. Pelican. June 1, 1974. http://www.amazon.com/Semantics-Meaning-Pelican-Geoffrey-Leech/dp/0140216944

     

    Moncton, January 9, 2009

     

     

    Naming Does Not Necessitate Existence

    Responding to Learning is Scaffolded Construction202 by Mark H. Bickhard.

    OK, the core of the argument is here. Everything before it leads to it, and everything after follows from it:

    Encoding models can tempt the presupposition of a passive mind: neither the wax nor the transducing retina need to be endogenously active. But there is no such temptation regarding interaction systems. The world could not impress a competent interaction system into a passive mind. Interaction systems must be constructed. Pragmatism forces constructivism.

    Furthermore, unless we assume that the organism already knows which constructions will succeed, these constructions must be tried out and removed or modified if they are not correct. Pragmatism forces a variation and selection constructivism: an evolutionary epistemology (Campbell, 1974).

    Now how could we get ourselves into such a situation? The answer lies in the presuppositions that led to this point. Specifically:

    A theory of encoding is, therefore, what we need to complete the bridge between … semantics and the computational story about thinking. … [An account of] encoding [is] pie in the sky so far. … we haven’t got a ghost of a Naturalistic theory about [encoding]. Fodor, 1987, pg. 81

    and

    The right questions are: “How do mental representations represent?” and “How are we to reconcile atomism about the individuation of concepts with the holism of such key cognitive processes as inductive inference and the fixation of belief?” Pretty much all we know about the first question is that here Hume was, for once, wrong: mental representation doesn’t reduce to mental imaging. Fodor, 1994, pg. 113

    In other words, the mind is depicted as a representational system. But there is a disconnect between representations and the things being represented. For example, some representations may be false; that is (to simplify) the state of affairs represented does not actually exist. Hence representations cannot be caused entirely by the phenomena that cause them. Rather, they must be constructed, through some process of interpretation of those phenomena.

    The problem with depending on Fodor to set up the state of affairs is that a reference to Fodor brings with it quite a bit of baggage. Fodor, like Chomsky, argues that the linguistic capacity is innate. Fodor calls this ‘the language of thought’ and argues, not only that grammar and syntax are innate, as Chomsky argues, but also that the semantics are innate, that we are born with (the capacity to represent) all the concepts we can express. How is it that we can use the term ‘electric typewriter’ in a sentence? Because we were born with it.

    But what if Fodor’s theory, in particular, and the representational theory of mind, in general, are wrong? What if perception and cognition are not the result of a process of ‘encoding’. What if the human mind is much more like Hume’s version (very misleadingly described as a blob of wax)? What if semantic properties, such as ‘truth’ and ‘falsehood’ (and moral properties, such as ‘right’ and ‘wrong’) are more like sensations or emotions, instead of an account of some sort of correspondence between a proposition in the mind (as interpreted through a constructed mental representation) and a state of affairs in the world?

    Because of Fodor’s perspective, he wants you to believe that empiricism promotes certain corollaries:

    1. The mind is a passive receiver of input and knowledge,
    2. Learning is independent of prior state and of context,
    3. The ideal form of learning is errorless learning.

    It is certainly debatable whether Hume would believe any of these, and they are certainly false of modern empiricism. Much is made of the failures of causal theories of perception (which is why simple encoding fails, and why a representational theory is required in its place). But what if, as Hume says, cause is nothing more than the natural human inclination to ascribe a relation between two objects when the one frequently follows from the other? What if causation itself is something humans bring to the table? This is certainly not passive perception – humans, on Hume’s theory, through ‘custom and habit’ interpret a perception as one thing or another.

    These considerations constitute a response to the interaction theory proposed in this paper. Representations, on this theory, constitute ‘interaction possibilities’, that is, possible responses an agent may undertake in response to given stimuli (or perceptions). These have all of the properties of representations (truth values, content) but – by virtue of being implicit, do not suffer from the pitfalls of representationalism. We don’t need to show how it was caused by this or that, because only the interaction possibility, not the representation itself, is caused by the phenomena. “Encoding models, in contrast, are not future oriented, but backward oriented, into the past, attempting to look back down the input stream.”

    Fair enough, and a spirited response to the myriad problems facing representational theories of mind (problems imposed by its more empiricist critics), but if Hume’s position (as understood here, and not mangled by Fodor) stands, then this proposition does not follow: “The world could not impress a competent interaction system into a passive mind. Interaction systems must be constructed.” And of course, if this does not follow, the need for scaffolding, and the attendant infrastructure required, does not follow.

    And Hume’s position stands. We are misled by the ‘wax’ analogy. Even the slightest inspection reveals that perceptions are not like metal stamps, nor are brains anything like lumps of wax. A brain is a complex entity, such that when a perception makes an impression on any bit of it (i.e., when a photon strikes a neural cell) the mind is not left with a resultant ‘dent’ but rather a myriad of disturbances and reflections, rather like the way water ripples when struck by a pebble or a raindrop. Some of these ripples and reflections have more or less permanent consequences; just as repeated waves form surface features, such as sandbars, that change the shape of subsequent waves, so also repeated perceptions form connections between neurons, that change the way the impact of a photon ripples through the neural network.
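
    As a toy rendering of this picture (a sketch of my own; the update rule and the numbers are assumptions, not anything taken from Bickhard or from Hume), repeated co-activation strengthens the connections between units, and those strengthened connections change how a later, partial perception ripples through:

```python
# A toy sketch: repeated perceptions strengthen connections between units,
# which changes how a later, partial perception propagates ('ripples').
n = 4
weights = [[0.0] * n for _ in range(n)]  # connection strengths between n units

def hebbian_update(activation, rate=0.1):
    # units that are active together have their connection strengthened
    for i in range(n):
        for j in range(n):
            if i != j:
                weights[i][j] += rate * activation[i] * activation[j]

def propagate(stimulus):
    # each unit's response is the stimulus plus weighted input from the other units
    return [stimulus[i] + sum(weights[j][i] * stimulus[j] for j in range(n))
            for i in range(n)]

pattern = [1.0, 1.0, 0.0, 0.0]            # a repeatedly experienced perception
before = propagate([1.0, 0.0, 0.0, 0.0])  # a partial cue, before any experience
for _ in range(20):
    hebbian_update(pattern)               # repeated 'waves' reshaping the 'sandbar'
after = propagate([1.0, 0.0, 0.0, 0.0])   # the same cue now re-activates its old partner

print(before)  # [1.0, 0.0, 0.0, 0.0]
print(after)   # [1.0, 2.0, 0.0, 0.0] - the ripple now has a different shape
```

    The ‘learning’ here is nothing but the accumulated consequences of past impacts; no constructor appears anywhere in the loop.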

    The world, therefore, could impress a competent interaction system (so-called) into a passive mind. And therefore (happily) interaction systems (so-called) need not be constructed, which is a good thing.

    Why is it a good thing? Because if the interaction system (so-called – I am saying ‘so-called’ because the resulting neural structure may be described as an ‘interaction system’ or may be described as something else) is constructed then there must be some entity that does the constructing. And if this is the case, then there are only two possibilities:

    Either, 1, the construction is accomplished by the learner himself or herself, which raises the question of how the learner could attain a mental state sufficiently complex to be able to accomplish such constructions, or

    2. an external agency must accomplish the construction, in which case the question is raised as to how the perceptions emanating from the external agent to the learning agent could be perceived in such a way as to accomplish that construction.

    The pragmatist turn does not resolve the problem. Indeed, it makes the problem even worse. Bickhard writes, ” Pragmatism forces a variation and selection constructivism: an evolutionary epistemology.” This means even more constructions must be constructed, both those that survive the ‘evolutionary trial’ and those that don’t.

    Indeed, the use of ‘evolutionary’ terminology to describe the state of affairs here is very misleading.

    The problem is, any representational theory – whether it employs virtual propositions or not – needs elements that are simply not found in nature. They need ‘truth’ and ‘representation’ and even (on most accounts) ‘causality’. They need, in other words, precisely the sort of things an intelligent agent would bring to the table. They need to be constructed in order to give them these properties. They need, in other words, to be created.

    Far from being an evolutionary theory of learning, this sort of theory is in fact a creationist theory of learning. It amounts to an assertion that the combination of a mind and some phenomena are not sufficient to accomplish learning, that some agency, either an intermediating external agency, or an internal homuncular agency, are needed. But both such agencies presuppose the phenomenon they are adduced to explain.

    In general, the ascription of such intentional properties – truth, meaning, causation, desire, right, interaction – which are not present naturally in the human mind or the phenomena it perceives can only be accomplished through some such circular form of reasoning. Historically, the existence of these properties has been used in order to deduce some necessary entity – an innate idea of God, an innate knowledge of grammar or syntax, or scaffolded construction, among others (the putative existence of this entity is then used to explain the phenomena in question, to add circularity on circularity).

    These properties, however, are interpreted properties. They constitute, at most, a way of describing something. They are names we use to describe various sets of phenomena, and do not exist in and of themselves. Consequently, nothing follows from them. Naming does not necessitate existence.

    Bickhard’s response:

    It is difficult to reply to something with so many mis-readings, both of my own work and of others.

    I cite Fodor concerning encodings because even he, as one of the paramount exponents of such a position, acknowledges that we don’t have any idea of how it could happen.

    Since the focus of all of my critical remarks is against such an encodingist position, it’s not clear to me how I end up being grouped with Fodor. Certainly nothing actually written commits me to any kind of innatism – that too is one of my primary targets in my general work. In fact, one of the primary paths away from the arguments for innatism is an emergentist constructivism. (This, of course, requires a metaphysical account of emergence – see the several papers and chapters that I have on that issue.)

    I don’t even know where to start regarding Hume, but there are some comments below as relevant to more specific issues.

    Representations, on this theory, constitute ‘interaction possibilities’, that is, possible responses an agent may undertake in response to given stimuli (or perceptions). These have all of the properties of representations (truth values, content) but – by virtue of being implicit, do not suffer from the pitfalls of representationalism. We don’t need to show how it was caused by this or that, because only the interaction possibility, not the representation itself, is caused by the phenomena.

    Representation is constituted, according to this model, by indications of interaction possibilities, not by interaction possibilities per se. And such indications are not caused, but, as attacked later, constructed.

    I fail to see how even the account of Hume given supports the claim:

    If Hume’s position (as understood here, and not mangled by Fodor) stands, then this proposition does not follow: “The world could not impress a competent interaction system into a passive mind. Interaction systems must be constructed.”

    That is:

    But what if, as Hume says, cause is nothing more than the natural human inclination to ascribe a relation between two objects when the one frequently follows from the other? What if causation itself is something humans bring to the table? This is certainly not passive perception – humans, on Hume’s theory, through ‘custom and habit’ interpret a perception as one thing or another.

    First, I’m not addressing cause at all. Second, Hume explicitly said that he had no idea how perception worked, so the claims being made on his behalf here are rather difficult to fit with Hume’s position. Third, interpretation, presumably based on custom and habit, is not necessarily passive, though Hume didn’t have much of a model of activity beyond association. Fourth, such “interpretations” are not themselves caused, so they constitute a partial gesture in the direction of construction. I’m arguing that such constructions are of indications of interaction potentials, and that the basic properties of representation are emergent in such indications. Fifth, independent of all of that, how does any such interpretation of Hume undo the basic point that “the world could not impress a competent interaction system into a passive mind”? There appears to be a serious non-sequitur here. The comments about ripples and reflections would both seem to advert to cause in the mental realm, and how could that be rendered coherent given the other comments about cause, and do not address issues of interaction or interaction systems at all.

    if the interaction system (so-called – I am saying ‘so-called’ because the resulting neural structure may be described as an ‘interaction system’ or may be described as something else) is constructed then there must be some entity that does the constructing.

    I fail to see this at all. By this reasoning, there must be some entity that does the constructing of life and organisms and the genome, etc. This truly does lead to creationism, but, if that is the position taken, then the path is pretty clear (it is as well pretty clear who takes such a position). On the other hand, the premise is clearly false. That is one of the central points of variation and selection constructivist models – things can be constructed, that fit particular selection criteria, without there being any external or teleological constructor. The possibility that the organism, mind, etc. does the constructing itself is dismissed with a question of how it becomes sufficiently complex to do that sort of thing. But the ensuing “discussion” seems to assume that there is no answer to this question. I have in fact addressed similar issues in multiple other places. And again, biological evolution itself is proof in principle of the possibilities of such “auto-construction”.

    Bickhard writes, ” Pragmatism forces a variation and selection constructivism: an evolutionary epistemology.” This means even more constructions must be constructed, both those that survive the ‘evolutionary trial’ and those that don’t.

    Sorry about that, but if constructions are possible, then they are possible, and if the lack of foreknowledge requires that many constructions be made that are ultimately found to fail, then get used to it. I take it that the author is also greatly exercised about biological evolution, which similarly involves lots of errors along the way.

    The problem is, any representational theory – whether it employs virtual propositions or not – needs elements that are simply not found in nature. They need ‘truth’ and ‘representation’ and even (on most accounts) ‘causality’. They need, in other words, precisely the sort of things an intelligent agent would bring to the table.

    Are human beings not part of nature? Are frogs not part of nature? If they are part of nature, then “representation”, “truth”, and so on are also part of nature, and are in fact found in nature. The problem is to account for that, not to sneer at attempts to account for it. Or, if the preferred answer is that they are not part of nature, then that agenda should be made a little more clear, and we could debate naturalism versus anti-naturalism (dualism?) – or perhaps a simple physicalist materialism?

    Far from being an evolutionary theory of learning, this sort of theory is in fact a creationist theory of learning. It amounts to an assertion that the combination of a mind and some phenomena are not sufficient to accomplish learning, that some agency, either an intermediating external agency, or an internal homuncular agency, are needed. But both such agencies presuppose the phenomenon they are adduced to explain.

    Since it is the author of these diatribes who rejected any kind of emergentist constructivism, it would seem that the epithet of “creationist” fits the other side. Certainly it does not fit the model I have outlined. Note also that the possibility of an agent doing his or her own construction is here rendered as “an internal homuncular agency”. Where did that come from (“homuncular” was not in the earlier characterization of “auto” construction)? If constructions can generate emergents, then internal constructions can generate emergents, and, if those emergents are of the right kind, then what is to be explained is not at all presupposed. If anything legitimately follows from anything in this rant, it follows from the author’s own assumptions, not from mine.

    In general, the ascription of such intentional properties – truth, meaning, causation, desire, right, interaction – which are not present naturally in the human mind or the phenomena it perceives can only be accomplished through some such circular form of reasoning. Historically, the existence of these properties has been used in order to deduce some necessary entity – an innate idea of God, an innate knowledge of grammar or syntax, or scaffolded construction, among others (the putative existence of this entity is then used to explain the phenomena in question, to add circularity on circularity).

    Earlier, causation at least was located solely in the human mind. But I take it from this that intentionality is in toto supposed to be not a real class of phenomena; none of these properties or phenomena actually exist – ?? If that is the position, then to what is the illusion of intentionality presented, or in what is the illusion of intentionality generated (constructed?). I cannot make enough sense of this to even criticize it. If what is being asked for (though not very politely) is an account of how such circularities regarding normative and intentional phenomena are to be avoided, then I would point to, for example:

    Bickhard, M. H. (2006). Developmental Normativity and Normative Development. In L. Smith, J. Voneche (Eds.) Norms in Human Development. (57-76). Cambridge: Cambridge University Press.

    Bickhard, M. H. (2005). Consciousness and Reflective Consciousness. Philosophical Psychology, 18(2), 205-218.

    Bickhard, M. H. (2004). Process and Emergence: Normative Function and Representation. Axiomathes — An International Journal in Ontology and Cognitive Systems, 14, 135-169. Reprinted from: Bickhard, M. H. (2003). Process and Emergence: Normative Function and Representation. In: J. Seibt (Ed.) Process Theories: Crossdisciplinary Studies in Dynamic Categories. (121-155). Dordrecht: Kluwer Academic.

    These properties, however, are interpreted properties. They constitute, at most, a way of describing something. They are names we use to describe various sets of phenomena, and do not exist in and of themselves. Consequently, nothing follows from them. Naming does not necessitate existence.

    Since intentionality seems to have been denied, I fail to understand what “interpretation” or “naming” could possibly be. So, on his own account, these sentences seem to be meaningless – the basic terms in them have no referents (but, then, what is reference?).

    I apologize for my paper having been the occasion for such mean spirited nugatory “discussion”. I have tried to keep responses “in kind” to a minimum. I am not accustomed to such as this, though perhaps it constitutes a “learning experience”.

    My reply:

    Mark H. Bickhard wrote:

    It is difficult to reply to something with so many mis-readings, both of my own work and of others.

    I think this comment has as much to do with the other discussion as with this.

    I cite Fodor concerning encodings because even he, as one of the paramount exponents of such a position, acknowledges that we don’t have any idea of how it could happen.

    Since the focus of all of my critical remarks is against such an encodingist position, it’s not clear to me how I end up being grouped with Fodor.

    One person can be against a person in one way, and grouped with him in another. A Protestant may be different from a Catholic, but this is not an argument against lumping them together as Christians. Similarly, though you disagree with Fodor on encoding, you nonetheless agree with him on mental contents (specifically, that they exist, that they have semantical properties, that they constitute representations, etc.). “Such indications of interaction possibilities,” you write, “I will claim, constitute the emergence of a primitive form of representation.” Moreover, “such indications of interactive potentiality have truth value. They can be true or false; the indicated possibilities can exist or not exist. The indications constitute implicit predications of the environment — this environment is one that will support this indicated kind of interaction — and those predications can be true or false.”

    Related: Clark Quinn asks, “Stephen, are you suggesting that there are no internal representations, and taking the connectionist viewpoint to a non-representational extreme?” Generally, yes. Though I wouldn’t call it an “extreme”. But let me be clear about this. I do not deny that there is a representationalist discourse about the mind (to deny this would be to deny the obvious). People certainly talk about mental contents. But it does not follow that mental contents exist. Just as, people may talk about unicorns, but it doesn’t follow that unicorns exist. To me, saying ‘there are representations’ and saying ‘there are interaction possibilities’ is to make the same kind of move, specifically, to look at what might generally be called mental phenomena, and to claim to see in them something with representational and semantic properties. But since these properties do not exist in nature, it follows that they cannot be seeing them. Therefore, they are engaged in (as Hume might say) a manner of speaking about mental properties.

    I am certainly not the first person to make this sort of observation. You could liken it to Dennett’s ‘intentional stance’ if you like, though I would find a more apt analogy to be the assertion that you are engaging in a type of ‘folk psychology’ as described by people like Churchland and Stich. Yes, as Quinn suggests, a learning system can bootstrap itself. But there are limits. A learning system cannot bootstrap itself into omniscience, for example. As Quinn suggests, “the leap between neural networks and our level of discourse being fairly long.” And in some cases, impossibly long – you can’t get there from here. And my position is that the sort of system Bickhard proposes is one of those.

    Certainly nothing actually written commits me to any kind of innatism – that too is one of my primary targets in my general work.

    I did not write that you are committed to innatism. I wrote that the position you take commits you to either innatism or external agency.

    The reason is, if a mind (a neural network) cannot bootstrap itself into the type of representation you describe here, then the representation must come from some other source. And the only two sources are innate abilities (the move that Fodor and Chomsky take) or an external agency (the move creationists take). You can disagree with my primary assertion – you can say we can too get from there to here (though I don’t see this as proven in your paper). But if my primary position is correct, then there is really no dispute that you are forced into one or the other alternative.

    What you are in fact doing is giving us a story about external agency. This is evident, for example, when you say “It depends on whether or not the current environment is in fact one that would support the indicated kind of interaction.” You want ‘the environment’ to be the external agent. But the environment works causally. And the environment does not (except via some form of creationism) work intentionally. It doesn’t assert (contra the language you use) any sort of notion of ‘true’ or ‘false’; it just is. What is happening is that you are giving the environment properties it does not have, specifically, counterfactuals, as in “one that would support the indicated kind of interaction.” But there is no fact of the matter here. An environment’s counterfactual properties depend on our theories about the world (that’s why David Lewis takes the desperate move of arguing that possible worlds are real).

    In fact, one of the primary paths away from the arguments for innatism is an emergentist constructivism. (This, of course, requires a metaphysical account of emergence – see the several papers and chapters that I have on that issue.)

    I have looked at what you have posted online.

    I don’t even know where to start regarding Hume, but there are some comments below as relevant to more specific issues.

    Representations, on this theory, constitute ‘interaction possibilities’, that is, possible responses an agent may undertake in response to given stimuli (or perceptions). These have all of the properties of representations (truth values, content) but – by virtue of being implicit, do not suffer from the pitfalls of representationalism. We don’t need to show how it was caused by this or that, because only the interaction possibility, not the representation itself, is caused by the phenomena.

    Representation is constituted, according to this model, by indications of interaction possibilities, not by interaction possibilities per se. And such indications are not caused, but, as attacked later, constructed.

    With all due respect, I consider this to be a sleight of hand. Let’s work through this level by level.

    There are, shall we say, states of affairs – ways the world actually is.

    Then there are representations – things that stand for the way the world actually is (the way you can use a pebble, for example, to stand for Kareem Abdul-Jabbar).

    One type of representation (the type postulated by Fodor and company) is composed of sentences (more specifically, propositions). The difficulties with this position are spelled out in your paper. But another type of representation, postulated here, is composed of interactions.

    Except that the interactions do not yet exist, because they are future events. Therefore, they exist only as potentials, or as you say (borrowing from Derrida?) “traces” of interactions.

    Well, what can a ‘trace’ be if it is not an actual interaction?

    It has to be exactly the same sort of thing Fodor is describing, but with a different name. It has to be some sort of counterfactual proposition. Only a counterfactual proposition can describe counterfactuals and stand in a semantical relation (i.e., be true or false) to the world.

    That’s why I think this is just a sleight of hand.

    I fail to see how even the account of Hume given supports the claim:

    If Hume’s position (as understood here, and not mangled by Fodor) stands, then this proposition does not follow: “The world could not impress a competent interaction system into a passive mind. Interaction systems must be constructed.”

    That is:

    But what if, as Hume says, cause is nothing more than the natural human inclination to ascribe a relation between two objects when the one frequently follows from the other? What if causation itself is something humans bring to the table? This is certainly not passive perception – humans, on Hume’s theory, through ‘custom and habit’ interpret a perception as one thing or another.

    First, I’m not addressing cause at all.

    I’ll give you this, but claim a chit, which I’ll cash in below.

    Second, Hume explicitly said that he had no idea how perception worked, so the claims being made on his behalf here are rather difficult to fit with Hume’s position.

    Hume writes, “All the perceptions of the human mind resolve themselves into two distinct kinds, which I shall call IMPRESSIONS and IDEAS. The difference betwixt these consists in the degrees of force and liveliness, with which they strike upon the mind, and make their way into our thought or consciousness.” And “There is another division of our perceptions, which it will be convenient to observe, and which extends itself both to our impressions and ideas. This division is into SIMPLE and COMPLEX.” And “Having by these divisions given an order and arrangement to our objects, we may now apply ourselves to consider with the more accuracy their qualities and relations.” This is from the Treatise, Book 1, Part 1, Section 1.203

    Given that he then went on to compose three volumes based on the account of perceptions outlined here, I would say that he believed that he did indeed have a very clear idea of how perception works. What he does not claim to know, of course, is how perceptions are caused. But that is a very different matter.

    For as to the specific claim about causation, “Cause is nothing more than the natural human inclination to ascribe a relation between two objects when the one frequently follows from the other,” I turn to the Enquiry: “Suppose a person… to be brought on a sudden into this world… He would not, at first, by any reasoning, be able to reach the idea of cause and effect… Their conjunction may be arbitrary and casual. There may be no reason to infer the existence of one from the appearance of the other… Suppose, again, that he has acquired more experience, and has lived so long in the world as to have observed familiar objects or events to be constantly conjoined together; what is the consequence of this experience? He immediately infers the existence of one object from the appearance of the other… And though he should be convinced that his understanding has no part in the operation, he would nevertheless continue in the same course of thinking. There is some other principle which determines him to form such a conclusion. This principle is Custom or Habit.” Enquiry Section 5, Part 1, 35-36.204

    I maintain that I have represented Hume correctly.

    Third, interpretation, presumably based on custom and habit, is not necessarily passive, though Hume didn’t have much of a model of activity beyond association.

    I was not the one to make that assertion. Hume is an empiricist, and it was you who cited the principle that ‘The mind is a passive receiver of input and knowledge’. As suggested, by ‘custom and habit’ Hume doesn’t mean much beyond association. I am willing to allow slightly more; for example, I have in presentations asserted that beyond simple Hebbian association we can also postulate activity such as Boltzmann ‘settling’ and ‘annealing’ along with, of course, some story about back propagation (though, of course, that story involves past ‘training’ events, not postulated traces of future training events).

    This seems to me to be non-controversial as a principle: that insofar as there is a model of activity, this model of activity cannot ascribe to that activity forces other than the state and nature of the brain itself, and stimulations of that brain (aka ‘perceptions’). Specifically (and this is where Clark Quinn calls me ‘radical’) I argue that it cannot include the postulation of events or entities with semantical properties (aka ‘mental contents’, ‘propositions’, ‘representations’, and relevant to the current discussion, ‘counterfactuals’). Because – though you don’t want me to lump you in with Fodor – the same sort of problems ‘encodings’ have are shared by these other events or entities.

    Fourth, such “interpretations” are not themselves caused, so they constitute a partial gesture in the direction of construction.

    I’ll give you this – but claim the same chit I did above. We’ll come back to this.

    I’m arguing that such constructions are of indications of interaction potentials, and that the basic properties of representation are emergent in such indications.

    Fifth, independent of all of that, how does any such interpretation of Hume undo the basic point that “the world could not impress a competent interaction system into a passive mind”?

    By “a competent interaction system into a passive mind” I mean the sort of entity you describe, that stands in a semantical relation to the world.

    There appears to be a serious non-sequitur here. The comments about ripples and reflections would both seem to advert to cause in the mental realm, and how could that be rendered coherent given the other comments about cause, and do not address issues of interaction or interaction systems at all.

    … and yet does not advert to cause.

    The comment about ripples and reflections is a metaphor to suggest that the same kind of thing happens in the brain. ‘Causation’ is the theory used to explain both. My views on the nature of causation are similar to Hume’s.

    And – just as there is no ‘truth’ or ‘representation’ or ‘indications of interaction potentials’ in the ripples in the pond, nor either are there any such in the brain.

    if the interaction system (so-called – I am saying ‘so-called’ because the resulting neural structure may be described as an ‘interaction system’ or may be described as something else) is constructed then there must be some entity that does the constructing.

    I fail to see this at all. By this reasoning, there must be some entity that does the constructing of life and organisms and the genome, etc. This truly does lead to creationism, but, if that is the position taken, then the path is pretty clear (it is as well pretty clear who takes such a position). On the other hand, the premise is clearly false.

    OK, now I’m claiming my chit. You are saying the following:

    That is one of the central points of variation and selection constructivist models – things can be constructed, that fit particular selection criteria, without there being any external or teleological constructor.

    Now of course a “variation and selection model” is, essentially, evolution. In a thing that can be reproduced (such as, say, a gene) introduce some sort of variation (such as, say, a mutation) in various reproductions. Then, through some sort of test (such as, say, survival) select one of those variations to carry on the reproductive chain. It is, in other words, a fancy way of saying ‘trial and error’.
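
    The bare mechanism is easy to state in code. Here is a minimal sketch (mine, not Bickhard’s model; the ‘selection criterion’ is an arbitrary number chosen purely for illustration) of variation and selection as nothing more than trial and error:

```python
# A minimal sketch of 'variation and selection' as trial and error:
# copy with variation, then keep whatever best fits the selection test.
import random

target = 42  # an arbitrary selection criterion: closeness to this number

def vary(candidate):
    # reproduction with a small random variation ('mutation')
    return candidate + random.choice([-3, -2, -1, 1, 2, 3])

def fitness(candidate):
    # the 'test' - nothing intentional about it, just a number
    return -abs(candidate - target)

population = [0]
for generation in range(200):
    offspring = [vary(random.choice(population)) for _ in range(10)]
    # selection: only the best few carry on the reproductive chain
    population = sorted(population + offspring, key=fitness, reverse=True)[:5]

print(population[0])  # after enough trial and error, something close to 42 survives
```

    Nothing in the loop is ‘true’ or ‘representational’; what remains at the end is simply what the test happened to leave standing.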

    Strictly speaking, “variation and selection constructivism” is a misnomer. The term ‘construction’ implies a deliberately formed entity with some goal or purpose in mind – in other words, an act of creation. It’s like coming up with a theory of ‘evolutionary creationism’.

    Still, leaving the connotations aside, there is a story that can be told here. But there is a crucial difference between ‘variation and selection’ and what is being offered here.

    An analogy: there is no sense to be made of the assertion that the species that remain, red in tooth and claw, after the ravages of natural selection, are ‘true’. Nor indeed would anybody say that they ‘represent’ nature. It is even a stretch to say that they are the ‘best’. They just happen to be what was left after repeated iterations of a natural process. There was no sense of truth, representation or morality in the process that created them, and hence there is no sense of truth, representation or morality in what was created. Even the phrase ‘survival of the fittest’ attributes an intentionality that is just not present. It could equally well be (given our world’s experiences with comets and ice ages and humans) ‘survival of the luckiest’. Certainly, the major attribute that explains the survival of, say, kangaroos, is ‘living in Australia’.

    So – even if a process of trial and error, or shall we say, variation and selection, results in a given mental state, from whence does it obtain its semantic properties? The state of affairs that produces a mental state could indeed produce any number of mental states (and has, so far, produced roughly ten billion of them through history). It would be a miracle that any of them, all by itself, would become representative, much less true.

    The word ‘construction’ implies a ‘construction worker’ for a reason. The word ‘construction’ suggests semantic attributes. That is why it is no surprise to see Bickhard claim them in his essay.

    So what is the difference between natural selection, which does not produce semantic properties, and variation and selection constructivism, which does?

    It is this: the entities or events that do the selection in natural selection actually exist. They are past entities that could have actually informed the selection. The entities in the model postulated here, however, do not exist in the brain or the natural world. They are future events, counterfactuals, potentials or traces. They exist only insofar as they are postulated. But if they are postulated, we are begging the question of how they were created in the first place.

    Natural selection makes a great scientific theory. It explains numerous phenomena, from the existence of alligators to the operation of the immune system. But natural selection makes a lousy semantic theory. The only way to introduce ‘truth’ or ‘representation’ or ‘content’ into such a system is to invent it, to introduce it surreptitiously using some sort of sleight of hand, as I have described above.

    The possibility that the organism, mind, etc. does the constructing itself is dismissed with a question of how it becomes sufficiently complex to do that sort of thing. But the ensuing “discussion” seems to assume that there is no answer to this question. I have in fact addressed similar issues in multiple other places. And again, biological evolution itself is proof in principle of the possibilities of such “auto-construction”.

    Biological evolution is proof of no such thing.

    It is a mangling of the language to say that animals were ‘constructed’.

    There is, indeed, self-organization. I have referred to it myself many times. But it is not a process of ‘construction’. It is not imbued with intentional properties. Mental states do not become ‘true’ or ‘representational’ because we evolve into them. We do not in any way ‘select’ them; actual phenomena (and not non-existing counterfactuals) strengthen one or another in our minds.

    Bickhard writes, ” Pragmatism forces a variation and selection constructivism: an evolutionary epistemology.” This means even more constructions must be constructed, both those that survive the ‘evolutionary trial’ and those that don’t.

    Sorry about that, but if constructions are possible, then they are possible, and if the lack of foreknowledge requires that many constructions be made that are ultimately found to fail, then get used to it. I take it that the author is also greatly exercised about biological evolution, which similarly involves lots of errors along the way.

    Every iteration of a duck is slightly different from every other. I don’t have a problem with that. I believe that the reproduction of ducks, of multiple diverse types of ducks, is a good thing.

    But what I don’t believe is that a reproduction of a duck can be described as a test of such-and-such a theory, that the natural variation of ducks produces some sort of ‘true’ duck or even an ‘optimal’ duck, much less a ‘representational’ duck. A duck is just a duck. It doesn’t mean anything.

    The problem is, any representational theory – whether it employs virtual propositions or not – needs elements that are simply not found in nature. They need ‘truth’ and ‘representation’ and even (on most accounts) ‘causality’. They need, in other words, precisely the sort of things an intelligent agent would bring to the table.

    Are human beings not part of nature? Are frogs not part of nature? If they are part of nature, then “representation”, “truth”, and so on are also part of nature, and are in fact found in nature. The problem is to account for that, not to sneer at attempts to account for it. Or, if the preferred answer is that they are not part of nature, then that agenda should be made a little more clear, and we could debate naturalism versus anti-naturalism (dualism?) – or perhaps a simple physicalist materialism?

    When you say things like “The problem is to account for that, not to sneer at attempts to account for it” you are making exactly the same move people like Chomsky and Fodor make (you may as well have said ‘poverty of the stimulus’ and quoted Chomsky directly).

    We have, it is argued, the capacity to think of universals, such as ‘all ducks quack’. But universals do not exist in nature (because they extend to non-existent future events). Therefore… what? Chomsky says they must exist in the mind. You say… what? That they are the result of trial and error? How would that work for these non-existing future events?

    What is the case, in fact, is that what we think are universals, what we call universals, are not actually universals. They are summarizations, they are abstractions, they are something that can actually coexist with the stimuli, however impoverished.

    You cannot assume representations in order to argue for a representational theory of mind.

Far from being an evolutionary theory of learning, this sort of theory is in fact a creationist theory of learning. It amounts to an assertion that the combination of a mind and some phenomena is not sufficient to accomplish learning, that some agency, either an intermediating external agency or an internal homuncular agency, is needed. But both such agencies presuppose the phenomenon they are adduced to explain.

Since it is the author of these diatribes who rejected any kind of emergentist constructivism, it would seem that the epithet of “creationist” fits the other side. Certainly it does not fit the model I have outlined. Note also that the possibility of an agent doing his or her own construction is here rendered as “an internal homuncular agency”. Where did that come from (“homuncular” was not in the earlier characterization of “auto” construction)? If constructions can generate emergents, then internal constructions can generate emergents, and, if those emergents are of the right kind, then what is to be explained is not at all presupposed. If anything legitimately follows from anything in this rant, it follows from the author’s own assumptions, not from mine.

    This really is a gloss of my position, and not a particularly kind one. I hope this version of it is clearer.

    In general, the ascription of such intentional properties – truth, meaning, causation, desire, right, interaction – which are not present naturally in the human mind or the phenomena it perceives can only be accomplished through some such circular form of reasoning. Historically, the existence of these properties has been used in order to deduce some necessary entity – an innate idea of God, an innate knowledge of grammar or syntax, or scaffolded construction, among others (the putative existence of this entity is then used to explain the phenomena in question, to add circularity on circularity).

Earlier, causation at least was located solely in the human mind. But I take it from this that intentionality is in toto supposed to be not a real class of phenomena; none of these properties or phenomena actually exist – ?? If that is the position, then to what is the illusion of intentionality presented, or in what is the illusion of intentionality generated (constructed)? I cannot make enough sense of this to even criticize it.

    Oh goodness, what an equivocation.

    When I say that ‘unicorns only exist in the mind’ I am not in any way asserting that large (or I guess very tiny?) horned horses are prancing about the cerebral cortex.

    If what is being asked for (though not very politely) is an account of how such circularities regarding normative and intentional phenomena are to be avoided, then I would point to, for example:

    Bickhard, M. H. (2006). Developmental Normativity and Normative Development. In L. Smith, J. Voneche (Eds.) Norms in Human Development. (57-76). Cambridge: Cambridge University Press.

    Bickhard, M. H. (2005). Consciousness and Reflective Consciousness. Philosophical Psychology, 18(2), 205-218.

    Bickhard, M. H. (2004). Process and Emergence: Normative Function and Representation. Axiomathes — An International Journal in Ontology and Cognitive Systems, 14, 135-169. Reprinted from: Bickhard, M. H. (2003). Process and Emergence: Normative Function and Representation. In: J. Seibt (Ed.) Process Theories: Crossdisciplinary Studies in Dynamic Categories. (121-155). Dordrecht: Kluwer Academic.

    These properties, however, are interpreted properties. They constitute, at most, a way of describing something. They are names we use to describe various sets of phenomena, and do not exist in and of themselves. Consequently, nothing follows from them. Naming does not necessitate existence.

    Since intentionality seems to have been denied, I fail to understand what “interpretation” or “naming” could possibly be. So, on his own account, these sentences seem to be meaningless – the basic terms in them have no referents (but, then, what is reference?).

    I apologize for my paper having been the occasion for such mean spirited nugatory “discussion”. I have tried to keep responses “in kind” to a minimum. I am not accustomed to such as this, though perhaps it constitutes a “learning experience”.

    To take offense at my response is ridiculous. It was certainly not mean-spirited, rude, or anything else. Again, I think you are attributing the properties of some other discussion to this one. I cannot otherwise understand why you would object to my response.

    Indeed, in the spirit of completeness, perhaps you can point to sentences in my previous response where I was in fact mean spirited, nugatory, rude, or anything else. What specific sentences did you find objectionable? I most certainly have no wish to cause offense, though I certainly do not take that to preclude the possibility of disagreeing with you.

    I submit that I interpreted your position correctly, interpreted Hume correctly (among others), and have fairly and successfully criticized your presentation, and that I did so in an academically responsible manner.

    (Update: 2011 – the moderator of ITForum refused to allow this response to be posted on the list, and to allow me to continue this discussion. I therefore retired from all further discussions on ITForum.)

    Footnotes

202 Mark H. Bickhard. Learning is Scaffolded Construction. IT Forum. 2006. http://it.coe.uga.edu/itforum/paper98/LearnScafCon.pdf

    203 http://www.class.uidaho.edu/mickelsen/ToC/hume%20treatise%20ToC.htm

    204 David Hume. An Enquiry Concerning Human Understanding. Section 5, Part 1, 35-36 http://darkwing.uoregon.edu/%7Erbear/hume/hume5.html

    Moncton, April 23, 2007

     

    ‌Connectivism, Peirce, and All That

    I was asked:

    You drew a black box, and typed the words Black Box over it. You then started to talk about that more when I typed in the IM something about C.S. Peirce’s triads, to which you responded vocally: “I’m trying to get away from that” (or words very much like that).

    I really need to understand why you are trying to get away from Peircian representations of the triadic relation between the signs and symbols we use for things in our knowledge bases.

Off the top of my head (so my wording may not be precise, recollections not exact, etc.): from where I sit, the picture from word to object is fraught with difficulties.

    • there is the case where the object does not exist, and yet the word continues to have meaning. For example, ‘brakeless trains are dangerous’, to borrow from Russell. The whole area of counterfactuals in general. Which, if we follow the inferential trail, would have us believing with David K. Lewis that possible worlds are real. So minimally the meaning of the word, with respect to the object, must take place with respect to a theory or theoretical tradition.
    • there is the case of indeterminacy of translation. The meaning of the word, with respect to the object, may be for one person very different from that of another person. Quine: the word ‘gavagai’ may refer to the rabbit itself, or the rabbit-stage of adulthood, or something else. Our inferences regarding meaning must be based on ‘analytic hypotheses’, which are themselves tentative.
• the skeptical argument. Inferences based on words are underdetermined with respect to the reference of those words. Nelson Goodman, for example – the extension of ‘green’ is the same as that of ‘grue’, yet the next instance of an object is ‘green’ but not ‘grue’. Therefore the meanings of ‘green’ and ‘grue’ are different, despite being established through the exact same set of experiences and/or objects in the world. This argument is similar to the private language argument as depicted by Kripke in his account of Wittgenstein’s thesis of ‘meaning is use’.

And so on. So, here is the approach to meaning I have adopted and understand to be a better way of thinking about it:

    • the meaning of the word does not lie in anything distinct from actual instances of the word (by analogy: the colour ‘red’ does not lie in anything distinct from instances of the colour ‘red’; the quantity ‘1’ does not lie in anything other than instances of the quantity ‘1’).
    • these instances occur in two separate environments, a personal environment, composed of neurons and connections, thoughts, perceptions, etc., and a public environment, composed of people, artifacts, architecture, other objects in the world, utterances, radio transmissions, etc.
• in each of these environments, instances of the word are embedded in a network of non-meaningful entities. In a person, thoughts of the word (beliefs, memories, knowledge, etc.) are contained in a network of neurons, no one of which (or identifiable set of which) comprises the word itself or the meaning of the word. Similarly, in the public environment, instances of a word appear in a wider network of non-meaningful entities (marks on paper, audio waves, digital data).
    • our perception of the word itself, and of the meaning of the word (for that’s what it is) is a form of pattern-recognition. Meaning is emergent from a substrate of non-meaningful, but connected, entities. In the personal environment, the meaning of the word is the perception of the word as an emergent phenomenon; in the social environment, the meaning of the word is the use of the word. (Thus, conversely, any emergent phenomenon, any artifact that is used, can have meaning, but again, the meaning is nothing more than the perception and use of that artifact).

There is not a ‘stands for’ relationship; words are (as they would say in database theory) ‘content-addressable’.

    Moncton, January 30, 2011

     

    Brakeless Trains – My Take

     

    (Note to philosophers: this represents only my take on the brakeless trains205 example, and is not intended to be a full and accurate depiction of Russell’s argument 206 (and does not even mention Strawson’s response).207 208 I am concerned here not for exegetical accuracy, but rather, a clear tracing of my thinking on the subject.)

On 02/25/2011 2:26 PM, Savoie, Rod wrote, referring to ‘Connectivism, Peirce and All That’209:

    “there is the case where the object does not exist, and yet the word continues to have meaning. For example, ‘brakeless trains are dangerous’, to borrow from Russell. The whole area of counterfactuals in general. Which, if we follow the inferential trail, would have us believing with David K. Lewis that possible worlds are real. So *minimally* the meaning of the word, with respect to the object, must take place with respect to a theory or theoretical tradition.”

    Is there a word in that sentence (“brakeless trains are dangerous”) that continues to have meaning whereas the object does not exist?

    I replied:

Yeah – there’s no such thing as a ‘brakeless train’ – all trains, and all trains that have ever existed, have had brakes. So there is nothing that the noun phrase ‘brakeless trains’ refers to.

    When you combine symbols (brakeless & trains), it is more a logic problem than a symbol/object problem.

    The quick answer is to say we can just combine the terms. But when we are trying to understand the meaning of the sentence, combining terms is insufficient.

    Let me explain (again, loosely following Russell):

    When we say “brakeless trains are dangerous”, are we saying “there exists an x such that x is a brakeless train and x is dangerous”? Well, no, because we are not saying “there exists an x such that x is a brakeless train.”

    So, how about, “there exists an x such that x is brakeless and x is a train and x is dangerous”? This is the ‘combining terms’ approach. But no, because we are not saying “there exists an x such that x is brakeless and x is a train.”

    Therefore, the statement “Brakeless trains are dangerous” cannot be rendered as an existential statement.

    What we really mean by the statement is the counterfactual: “For all x, if x is a brakeless train, then x is dangerous.” But what could we mean by such a statement? If meaning is what the statement refers to, or what makes the statement true, then the statement is essentially meaningless, because

    “For all x, if x is a brakeless train, then x is dangerous.” is equivalent to

    “For all x, x is not a brakeless train or x is dangerous.”

    which means that our meaning is satisfied by reference to all things that are not brakeless trains, that is to say, everything in the world. Which means that our statement has exactly the same meaning as “The present king of France is dangerous,” as the two sentences refer to exactly the same set of entities.
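For readers who like the forms written out, here is the same sequence in standard notation (my own transcription of the argument above, writing Bx for ‘x is a brakeless train’ and Dx for ‘x is dangerous’):

\exists x\,(Bx \land Dx)          % "there is a brakeless train, and it is dangerous" – rejected above
\forall x\,(Bx \rightarrow Dx)    % "for all x, if x is a brakeless train, then x is dangerous"
\forall x\,(\lnot Bx \lor Dx)     % the equivalent disjunctive form – satisfied by everything that is not a brakeless train

Since nothing is a brakeless train, the universal reading is satisfied vacuously, which is exactly the worry being raised here.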

    Perhaps, you might think, what we are talking about is not a union of the two sets, but an intersection. But the intersection of the set of ‘brakeless trains’ and the set of ‘things that are dangerous’ is empty, because there are no brakeless trains. Creating a three-set Venn diagram does not help either, because the intersection of ‘things that are brakeless’, ‘things that are trains’, and ‘things that are dangerous’ is also empty.

But what does it mean to combine symbols, if it does not mean to create the intersection of sets of objects denoted by the separate symbols? This is especially the case for a philosophy in which all statements depend on reference to experience for their truth. But even if you allow that some statements do not depend on reference to experience for their truth, the problem nonetheless remains, because there is no apparent way to create an inference to the conclusion ‘brakeless trains are dangerous’ that is not derived from the empty set, i.e., derived from a contradiction.

    Symbols are not limited to physical objects (because you seem to make that inference in your argument, but I do know that you don’t necessarily think that).

    Quite so. Symbols are not limited to physical objects. But for those symbols that are not limited to physical objects, where do they get their meaning? In semiotics generally, it must be something that is not the symbol itself; it must be from whatever the symbol signifies. Because it is the state of affairs in whatever the symbol signifies that will, for example, allow us to determine whether a statement containing the symbol is true or false.

You can make stuff up. You can give ‘nothingness’ a sense (Sartre). Or ‘time’ (Heidegger). Or ‘history’ (Marx). Or ‘spirit’ (Hegel). Or space-time (Kant). Or the self (Descartes). But to philosophers who base their meaningfulness in experience, such as the positivists (and such as myself), the philosophies thus created are literally meaningless. What makes a statement involving one or the other of them true or false? The appeal is always to some necessity inherent in the concept. But necessities are tautologies, and from a tautology, nothing follows.

So what do symbols mean, if they do not refer to physical objects? This now becomes the basis for the issues of modern philosophy. By far the primary contender is this: the symbols derive their meaning from a representation, where the representation may or may not have a direct grounding in the physical world.

For example, i – the square root of -1. It is clear that i does not refer to any real number, because the square root of -1 does not exist among the reals. Nonetheless, the symbol i has a meaning – I just stated it – and this meaning is derived from the fact that it is postulated by, or embedded in, a representation of reality, i.e., mathematics.

But what is the grounding for a representation? If we say ‘i has meaning in P’, where does the representational system P obtain its meaning? It must have some, if only to distinguish it from being a ‘castle in the air’. But more, if there is to be any commonality of representation, any communication between people using representational systems, then the representational system must in some way be externally grounded. Because, if i derives its meaning from its being embedded in the representational system, then, so does the symbol ‘1’. Because if you allow parts of your representation to have their meaning derived totally by reference to the physical world, you’re right back where you started, with essential elements of the system (like time, negation, self) without any external referent.

    There are some options:

    • picture (early Wittgenstein) – the representation is a picture or image of that which is represented
• coherence (Davidson) – it is the internal consistency of the representation itself that guarantees its truth
    • cognitivism (Fodor) – the representation is innate
    • possible worlds (Lewis) – the representation is grounded by reference to possible worlds
    • pragmatism (James) – the representation is useful
• use, or pragmaticism (Peirce) – the effect of the meaning on action, or (later Wittgenstein) the use of the representation

In special cases, there are even more options. In probability theory, for example, there are three major interpretations:
    • logical (Carnap) – the probability is the percent of the logical possibilities in which p is true
    • frequency (Reichenbach) – the probability is the observed frequency in which p is true
    • interpretive (Ramsey) – the percentage at which you would bet on p being true

As you can see, any of these could be applied to the statement that ‘brakeless trains are dangerous’ and we would have a story to tell, everything from the idea (from Davidson) that it is consistent and coherent with our understanding of trains, if not derived from it, that brakeless trains are dangerous, to (James) the usefulness of posting a sign to that effect in a train factory, to (Ramsey) how much an insurance company would be willing to cover you for were you to ride on a brakeless train.

    Which of these is true? They all are. Or to be more precise: none of them are. There is no external reality to which any of these ‘representations’ needs to set itself against in order to be true (or effective, or useful, etc.). They are each, in their own way, a self-contained system. And each of our representations of the world is a combination of some, or all, of them. The meaning of any given term in a representation is distributed across the elements of that representation, and the meaning of the term consists in nothing over and above that.

The entities thought so vital to the determination of truth in a representation – external objects, self, time, being, negation – are elements of the representation. The representation represents – no, is – the sum total of our mental contents.

    So we come back to the initial question:

    Is there a word in that sentence (“brakeless trains are dangerous”) that continues to have meaning whereas the object does not exist?

    And it follows that, if the phrase ‘brakeless trains’ does not refer to, or even represent, some external reality, none of the words in that sentence does. There are not special cases where some words refer and other words do not; all the words are, as it were, in the same boat. The case of ‘brakeless trains’ illustrates a case that applies to all words, even if it is only most evident in this particular example.

    Footnotes

205 OK, Russell’s actual example is “the present king of France is bald” – the ‘brakeless trains’ example has a much older history as an example of the existential fallacy. I probably got it from Richard Braithwaite, Scientific Explanation: A Study of the Function of Theory, Probability and Law in Science. Cambridge University Press. May 1, 1968. p. 305. But the point still holds – ‘the present king of France is bald’ can be true or false even if there is no king of France (it’s false, because there is no king of France).

    206 Bertrand Russell. On Denoting. Mind, New Series, Vol. 14, No. 56, pp. 479–493, October, 1905. http://cscs.umich.edu/~crshalizi/Russell/denoting/

207 Strawson, P. F. (July 1950). “On Referring”. Mind 59 (235): 327, July, 1950. http://www.jstor.org/discover/10.2307/2251176?uid=16783488&uid=3739416&uid=2&uid=3737720&uid=16732880&uid=3&uid=67&uid=62&sid=21100694093301

    208 Sveinbjorn Thordarson. Strawson’s Critique of the Russellian Theory of Description. Website. October 31, 2005. http://sveinbjorn.org/strawson-russell

    209 Stephen Downes. Connectivism Peirce and All That. Half an Hour (weblog). February 4, 2011. http://halfanhour.blogspot.ca/2011/02/connectivism-peirce-and-all-that.html

    Moncton, February 4, 2011

     

    Meaning as Medium

McLuhan’s ‘The medium is the message’ has always been interpreted as being about the physical substrate. That allows people to talk about an electric light bulb as carrying a message, or to say things like ‘the same content on television means something different than that content in a newspaper’. Etc.

But I think there’s another, more subtle, aspect to the slogan ‘the medium is the message’. And that is this: that the ‘meaning’ of a message isn’t the meaning of the words (say) contained in the message. That this content is the carrier for the message, which is (in a certain sense) subsymbolic. For example, when you say ‘Get out of town’ to a lawbreaker, you mean one thing, and when you say ‘Get out of town’ jokingly to a friend, you mean something else. The ‘message’ – that is, the words ‘Get out of town’ – do not constitute the content of the message at all; the ‘content’ is actually the reaction produced in the receiver by the message (which is why an electric light bulb and a 300 page book can both be messages).

Now we can take this a step further (and this is what I think of as ‘the medium is the meaning’). The ‘meaning’ of the message, properly so-called, is constituted by the state of affairs described by (referred to, represented by) the message. Thus, ‘snow is white’ means that snow is white. But this meaning is not the content of the message. You may be telling me that ‘snow is white’ but what you are actually saying depends on a wide range of factors – whether or not I had previously thought that snow was white, for example. On this view, again, you would think of the meaning as the carrier of the content.

But what is the message? It is a bit misleading to think of it as something that is actually ‘carried’. Because, at best, it represents some intent on the part of the sender, and intent isn’t something that can be carried in a message (it can be expressed in a message, but this is something very different). This is important because it breaks down the idea that there is some zone of shared meaning (or whatever it’s called) between the two speakers. Even if there is a shared meaning, it’s irrelevant, because the meaning is just the medium. It is simply the place where the interaction occurs. There is an interaction, but the interaction is not the transfer of some meaning. Rather, it is an attempt by a sender to express an intent – that is, to carry out some action (specifically, the action of causing (something like) a desired brain-state to occur in the listener).

    The ‘content’, as McLuhan would say, is the receiver. More precisely, the content is the resulting brain state. The content is the change in belief, attitude, expression, etc., in the listener, that is a result of the transmitting of the message, the rest of the environment at the time, and the receiver’s internal state. “What colour is the wall,” asked the listener. You turn on the light bulb. “Ah, I see,” he says.

This entire system is fraught with incompleteness and vagueness. The sender, for example, can only have a partial idea of the content he or she is actually sending with a message. There is the sender’s intended content (‘the wall colour is green’) which – inescapably – becomes entwined with a host of associated and unassociated content when encoded into those words. Because the set of words ‘the wall is green’ is inevitably a crude abstraction of the actual mental state the sender wishes to reproduce in the listener. The encoding itself encodes, en passant, a raft of cultural and situational baggage. It exposes the sender as an English speaker, who uses the system of six primary colours, who is referring to a terrestrial object (otherwise, it would be the ‘bulkhead’), etc. The tone of voice, handwriting, etc., can contain a multitude. And the like.

    The actual transmission can best be seen only as a scrap – the barest hint, which will allow the receiver to build a complex mental picture, one which presumably accords with the one the sender had hoped to create.

The receiver receives the sentence ‘the wall is green’ and decodes the ‘meaning’ of the sentence, which is a reference to a colour of a wall. This may or may not have been accompanied by some sensory experience or action (the turning on of a light bulb, say). These all, depending on all the other factors, cause a new mental state to emerge in the receiver’s mind. It may even be accompanied by some internal perceptions (such as mentally talking to oneself).

    The receiver may think, on hearing the sentence, “he thinks I’m stupid.” It should be clear that the ‘content’ of the message, as received, may have little to do with the content of the message as sent. Moreover, the sender knows this. The sender may intentionally cause the receiver to receive the insult. The expression of the intent may be semantically unrelated to the intent itself (just as the swinging of a bat is semantically unrelated to the hitting of a home run – it is only when viewed from a particular perspective that one can conjoin the one as an expression of the intent to do the other).

    This isn’t unique, of course. J.L. Austin spoke of ‘speech acts’ decades ago. John Searle talks about ‘indirect’ or ‘illocutionary’ speech acts. Max Weber talks about ‘sense’ and ‘intention’.

    Wittgenstein’s doctrine that ‘meaning is use’ could be considered an ‘action theory of language’. Habermas talks about language as the vehicle for social action.

    And there may not be any specific intent (not even of externality) in the sender’s mind. “He talks just to hear the sound of his own voice.” A lot of communication is just verbal flatulence. It nonetheless has content, because it nonetheless has an effect on the listener (however minimal). The actual effect may have little, if anything, to do with the intended effect. Semantics is distinct from cause; the sender’s intention does not have causal powers, only his or her actions do (and intention underdetermines action, and action under-expresses intention). That said, we are sensitive as listeners to this intention, and have a means (mirror neurons, for example) of perceiving it.

    Language is the vehicle we use to extend ourselves into the world. It is what we use to express our intent, and hence to manifest our thoughts as external realities.

     

    Moncton, February 05, 2008

     

    ‌Patterns of Change

    Submitted to the Critical Literacies course blog, June 7, 2010.

    Change is with us every day. Life would not be possible without it. Change may seem chaotic and unpredictable, but most change occurs in patterns that we can see and recognize.

    This post isn’t an attempt to be the final word on patterns of change. Rather, it is an attempt to introduce the idea and encourage people to think systematically about it.

    Linear Change

    Think about a car driving along the highway. Its position is changing every minute, every second. If the driver stays at a constant speed, then its position changes at a steady pace.

    Driving at 60 mph, for example, the car will travel at one mile per minute. After one minute, it has travelled one mile. After 10 minutes, 10 miles. After 60 minutes – one hour – 60 miles.

This is linear change. It is change that occurs at a constant pace. If represented on a graph, it would look like this:

    Linear Change

    Notice that the graph is a straight line. That is why we call this linear change.

    There are many examples of linear change in your everyday life. For example, if water runs steadily from a tap, the pot fills up at a constant rate. Or for example, if a new brick is added to a wall every 30 seconds, then the wall will grow at a linear rate.
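As a minimal sketch (my own illustration, in Python, not part of the original post), linear change is nothing more than a constant rate multiplied by elapsed time:

def position(rate_per_minute, minutes):
    # Linear change: the same amount is added for every unit of time.
    return rate_per_minute * minutes

print(position(1, 1))    # 1 mile after 1 minute, driving at 60 mph
print(position(1, 10))   # 10 miles after 10 minutes
print(position(1, 60))   # 60 miles after an hour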

    A significant proportion of educational theory is based on some sort of linear change. Here’s an example from a blog210 I was reading today:

    Personal Social Learning Continuum – source: aLearning Blog

It is typical to think of student progress, or learning progress, or some other sort of progress, as happening in a straight line based on some factor or another. But this can be misleading.

    Linear change is so common in our lives there is a temptation to think of all change as linear change. It’s very easy to be lulled into this.

    The stock market, for example, seems to rise at a fairly steady rate over a period of time. We come to expect this change, and to count on it. And then we’re surprised when it suddenly falls.

    Or, closer to home, the value of our house rises steadily, year after year. We come to expect this to continue indefinitely, and are not prepared for the day housing prices fall.

    Or, you are sliding down a hill. This feels a lot like driving or riding a bicycle, so you expect your speed to be constant. But all of a sudden, you are going much faster than you intended. Your rate of change has increased, catching you by surprise.

Nothing lasts forever. Things that change at a steady pace may appear to be easy to predict, something you can count on, but eventually something changes – the road ends, your gas runs out, you hit a hill, the water stops running, something – and your linear change becomes something else.

    A linear change can change in two ways:

     

    Acceleration211 or speeding up – the change can speed up. Something that appeared to change constantly can start changing faster and faster. If you press on the gas while driving a car, for example, your speed will accelerate.

    Deceleration or slowing down – the change can slow down or even come to a stop. In extreme cases, it can even reverse. If you press on the brake (or hit a wall) while driving, your speed will decelerate.

    Acceleration and Deceleration

In general, you can use linear change to make short term predictions, but because the rate of a linear change eventually changes, you need to watch for signs of acceleration or deceleration. Any time a course of action depends on constant, linear change you need to have contingency plans – or back-up plans – for sudden changes.

    That’s why we have seatbelts in cars; it’s a contingency, in case the car’s speed suddenly slows. That’s why we have blowout valves in oil wells; it’s a contingency in case the flow of oil suddenly increases. Many of the devices that are restraints or governors of some sort are contingencies, devices intended to deal with unexpected acceleration or deceleration.

     

    Exponential Change

Sometimes a change keeps on changing. If you keep your foot pressed on the accelerator you go faster and faster, for example. When you are falling, you fall faster and faster. The rabbit population in your back yard grows faster and faster every day.

    This sort of change is called exponential change. It is change that does not progress at a steady rate, like linear change, but which occurs at a faster and faster rate.

To picture exponential change, you can construct a simple mental model by imagining what happens when bacteria cells multiply. A single bacterial cell might divide into two cells once every 20 minutes, for example (this is actually how fast E. coli multiplies). This is known as its doubling time.

So, after 20 minutes, we have 2 E. coli cells. After 40 minutes, each of those has divided into two, and we have four E. coli cells. After an hour, they have divided again, and we have eight E. coli cells. In another hour, we have 64 cells. And so on. We’re not just adding E. coli cells to the mix, we’re multiplying them, so the number of cells increases at a faster and faster rate.

    Here’s what it looks like on a graph:

    Exponential Change
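To make the arithmetic explicit, here is a minimal sketch (my own illustration, in Python; the 20-minute doubling time is the figure used above):

def cells_after(minutes, doubling_time=20, initial=1):
    # Exponential change: the population multiplies by 2 once per doubling period.
    return initial * 2 ** (minutes // doubling_time)

for t in (20, 40, 60, 120):
    print(t, "minutes:", cells_after(t), "cells")   # 2, 4, 8, 64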

Today you read a lot of people writing that we are experiencing a time of exponential change in our society. This is because change in many different things seems to be happening more and more quickly.

World population212, for example, has been increasing exponentially. World population was 1 billion in 1800, 2 billion in 1920, 3 billion in 1960 (the year after I was born), 4 billion in 1974, and 6 billion in 2000.

    The pace of technological change has also been exponential. Moore’s Law213 says that processor power will double once every 18 months. Because this is a multiplier we know that it produces exponential change.

Because exponential change can grow so rapidly, we sometimes use a different type of graph to represent it. Graphed, the pace of technology change would look much like the pace of E. coli growth depicted above. But this would make it very difficult to represent.

So instead, we use what is called a logarithmic214 graph. Here’s a logarithmic graph of Moore’s Law:

    Moore’s Law – Source, Wikipedia

    Notice that on the left-hand axis (the Y-Axis, which runs up and down) we count the values not one by one but exponentially – 10, 100, 1000, 10000, and so on. In this type of graph, an exponential change looks like a straight line. This makes it easier for us to understand.
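If you want to see the effect for yourself, here is a small sketch (my own example, assuming Python with matplotlib installed) that plots the same doubling curve twice; on the logarithmic axis it comes out as a straight line:

import matplotlib.pyplot as plt

years = list(range(0, 31, 3))
values = [1000 * 2 ** (y / 1.5) for y in years]   # doubling every 18 months, from an arbitrary start of 1000

fig, (linear_ax, log_ax) = plt.subplots(1, 2)
linear_ax.plot(years, values)
linear_ax.set_title("Linear scale")
log_ax.plot(years, values)
log_ax.set_yscale("log")          # exponential growth appears as a straight line here
log_ax.set_title("Logarithmic scale")
plt.show()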

Models of progression typically invoke either linear change or exponential change. Consider, for example, the development of society215 in human history. We progressed from the hunter-gatherer stage to agriculture to industrial and now an information-age society. The very concept of progress216 has, embedded in it, some notion of constant forward change, whether at a steady rate or an ever-increasing rate.

    There is a danger to this. As with static linear change, we can come to expect change to continue indefinitely. Consider, for example, the advancement of the stock market. This is what we saw in 2000:

    Dow Jones 1928-2000 – Source: Yahoo Finance

This has a few bumps, but it’s pretty clearly an exponential change. It was on the basis of this long-term chart that investors were advised to “buy and hold” and “invest for the long term.” The fluctuations were minor compared to the overall trend. And so we based the economics of everything from mortgages to retirement accounts to business plans on this sort of long-term growth.

    But look at the same chart extended to 2010:

    Dow Jones 1928-2010 – Source: Yahoo Finance

    The exponential change has come to a dead halt. There was a crash after 9-11 and then another crash eight years later as the housing bubble burst. Overall, through the decade, there has been no growth in stock values at all. Other economic indicators have become similarly stagnant.

Exponential change can look inevitable when you’re in the middle of it. But like linear change, there’s always the possibility that the acceleration will decrease and even reverse. When this happens, the results can be even more destructive, because we will have built systems based on constantly accelerating growth, not a steady state or even a decline.

    Parabolic Change

There’s an old saying: what goes up must come down. This is a principle we can rely on in many circumstances. Throw a baseball into the air – it will rise higher and higher for a certain time, but eventually it will fall back to earth.

    This is parabolic change. It represents a situation that is limited in duration or extent, and where the changing factor will return to its origin. It looks like this on a graph.

    Parabolic Change

    There are many examples of parabolic change. The consumption of a limited resource, such as oil, is a good example. Consumption rises for a while as oil is found and refined. However, at a certain point in time – peak oil – the supply begins to fall, and as a result, our consumption of it begins to slow. Eventually, once all the oil is gone, consumption returns to zero.

Another example – interestingly – is the human life. When we are born we have few capacities. Gradually we grow, and get stronger, more agile, and smarter. But this (despite the confidence of youth) does not continue indefinitely. As we age, we slow down, become weaker, and even lose some of our mental abilities. Finally we die, and our capacities return to what they were before we were born, to zero.

    Arnold Toynbee217 describes the arc of civilization in this way. Civilizations rise and fall, he writes218, in a constant and predictable way. They expand in (more or less) a circular fashion until they grow too large for their infrastructure to support. Then, because of this, they begin to decline. “Things fall apart; the centre cannot hold.”

Not all such changes need to be a perfect parabola. Things can rise very slowly and fall very quickly – “It takes years to build a good reputation, and only seconds to destroy it” – and an arc can rise and drop sharply. In drama, we sometimes talk about the story arc, and this is typically a type of parabolic change, but it is not a nice smooth progression. Consider this arc219 from Buffy the Vampire Slayer:

    Buffy the Vampire Slayer Dramatic Arc – Source: Match-Cut.org

    Arcs do not always have to return to their starting point either. Sometimes the rise and fall is itself a type of change. Consider this diagram, the Gartner Hype Cycle220:

    Gartner Hype Cycle – Source: Wikipedia

What this diagram makes clear is that arcs can be positive or negative – they can create peaks or troughs. And, as mentioned, they can result in a higher end-point than starting point. As such, a change like this is – in the long run – effectively the same as a linear change. We could draw a straight line from the starting point to the end point. It’s the same result, even if the journey to get there was a little more exciting.

    Cycles

    “The more things change, the more they stay the same.” Sometimes it seems that, despite all the change in the world, things stay constant. It’s like being on a merry-go-round – you might travel a lot, but all you’ve done is to go around in circles.

Cycles form a large part of many theories of change. “History repeats itself,” we are told. “Those who do not learn from history are condemned to repeat it.” From the perspective of a single civilization, there seems to be a rise and fall, but from the perspective of history, we see a succession of rise and fall, rise and fall – a great cycle of history.

    We can, in fact, think of cycles as being like a series of parabolas or arcs. They may be positive or negative, depending on how you look at them. Like this:

    Cycles – Source: Doctronics

You may recognize this as a sine wave221. What a sine wave describes is the movement of a cycle. If you drew a chalk mark on a tire and rolled the tire, the sine wave would describe the up-and-down motion of the chalk mark as it rotated around the axle while the tire moved forward.

Our lives are full of cycles. We breathe in and breathe out. Our heart beats at a regular pace. We go to work and return home again. We wake and we sleep.

We can actually recognize cycles in sounds as well as by sight. All audio signals, in fact, are types of cycles. The sine wave depicted above, when implemented in electronics and broadcast through a speaker, becomes a musical note. Like this:

    Pitch – Source: Doctronics

    Play the sound associated with the wave form above: click here222

    The frequency is the number of times the cycle repeats in a second; the amplitude is how high and low each arc goes. In music, the frequency is the same as the pitch, and the amplitude is the same as the loudness.
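As a sketch of how directly these two quantities map onto sound (my own example, in Python, using only the standard library; the 250 Hz figure simply echoes the sample linked above):

import math, struct, wave

frequency = 250.0        # pitch: cycles per second (Hz)
amplitude = 0.5          # loudness: fraction of full scale
sample_rate = 44100      # samples per second
duration = 1.0           # seconds

with wave.open("tone.wav", "wb") as out:
    out.setnchannels(1)              # mono
    out.setsampwidth(2)              # 16-bit samples
    out.setframerate(sample_rate)
    for n in range(int(sample_rate * duration)):
        value = amplitude * math.sin(2 * math.pi * frequency * n / sample_rate)
        out.writeframes(struct.pack("<h", int(value * 32767)))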

I wrote223 last week about Soundation Studio.224 This is interesting because you can create your own types of waves to create different sounds. The sound effects generator (the blue box, lower left) can be used to create different types of waves – sine waves, like we’ve seen above, sawtooth waves, square waves, noise, and more.

    The point here is that we as humans are very sensitive to cycles. We create them, we repeat them, we have evolved an entire science of mathematics, electronics and music based on the manipulation of cycles. We are very prone to see them in the environment, and to expect to see the cycle repeat itself after a time.

And we are justified in this. Nature is filled with cycles, from the orbits of the planets to the rising and setting of the Sun to the flow of water through the ecosystem. Often, we draw a circle, instead of a sine wave, to represent some of these more complex cycles, such as the water cycle225.

    Water Cycle – Source: Environment Canada

    Cycles are like linear changes – it is very easy to become used to them, to become comfortable with them. It is natural to assume that cycles are inherent in nature, that they are an inescapable part of life. We see society move to the left, and see as natural a movement back to the more conservative right.

    While it is natural to think of a cycle as unending and unchanging, it would be a mistake. A cycle is a type of motion, whether it’s a tire on a car, sound waves produced by electronics, or the flow of water through an ecosystem. And there’s no such thing as perpetual motion. All motion requires some sort of impetus, some sort of energy to create and sustain it. Change the input, and you change the cycle.

     

    The Dialectic

The concept of the dialectic has its origin in Hegel226 and is basically the idea that in a cycle there is a motion forward. Hegel introduced us to the concept of thesis and antithesis227 – which would be similar to the up and down of a chalk mark, or the back and forth between left wing and right wing in politics. These, together, produce what he called the synthesis, which is the product of their interaction.

    As van der Veen228 writes229, the dialectic “contains elements of both cyclical and linear change, and thus change is spiral; significant change takes place as an attempt to resolve the accumulation of intolerable contradictions, the unravelling of stresses that are inherent in social life; short term repetitive change but with long term cumulative directional change; processes of change persist but the contents of the processes are changing.”

    Here’s a representation of that process:

    The Dialectic

This is the origin of the concept of the paradigm shift230. According to Thomas Kuhn231, science does not progress in a linear fashion, but rather progresses through a series of jumps from one paradigm to another. Within a paradigm we have what is called ‘normal science’, but eventually, contradictions, unexplained experimental results, and other problems and questions force the science into a crisis point. Through this crisis, our view of the world is revised, and we adopt new scientific theories, terms and concepts.

    Another way to depict the same process is to think of a series of parabolas – a cycle – creating a linear change. Like this:

    Viewed from a certain perspective, these aren’t cycles any more but spirals. There is a movement around and around, but it is headed in some direction. The cycle may be progressing upward, or it may be progressing downward.

Stock market analysts have created mathematical models based on forms of the dialectic to predict swings in share values. Here is an example232 called the Elliott Wave Principle233:

Here’s another example. The author starts234 with a basic wave pattern of change, the forming-norming model that has become quite popular:

These are then joined to create a full dialectic:

    This creates for us two distinct types of change, virtuous and vicious circles235. Wikipedia has pretty good examples of these:

    Virtuous Circle – “Economic growth can be seen as a virtuous circle. It might start with an exogenous factor like technological innovation. As people get familiar with the new technology, there could be learning curve effects and economies of scale. This could lead to reduced costs and improved production efficiencies. In a competitive market structure, this will probably result in lower average prices.”

    Vicious Circle – “Hyperinflation is a spiral of inflation which causes even higher inflation. The initial exogenous event might be a sudden large increase in international interest rates or a massive increase in government debt due to excessive spendings. Whatever the cause, the government could pay down some of its debt by printing more money (called monetizing the debt). This increase in the money supply could increase the level of inflation.”

    Virtuous and vicious circles are the result of feedback loops236. What happens is that the result of one cycle feeds into the next cycle, accelerating its effects. The change is not merely linear, it can be exponential. How this happens, and what causes it to happen, varies. Hegel thought it was the result of the world spirit. Marx thought it was the force of history.

Today we explain such effects through principles such as the network effect237 or the first mover advantage238. Vicious and virtuous circles occur in interconnected networks, where we have not only a circle but a much more interconnected web of entities. The result from one cycle feeds into the next cycle. In a network, such effects can result in cascade effects239.

    A disease sweeping through a society, a virus spreading through a computer network, a fashion fad sweeping the nation, an idea, word or meme occupying everyone’s thoughts – these are examples of cascade effects. Everything can change, sometimes permanently, as a result of a cascade effect.

Cascade effects can be wild, sudden, and hard to predict. We may think that we are in a normal cycle, while behind the scenes a change is gradually accelerating. Global warming is like that – we experience the warmth of the day, the coolness of night, and the warmth of summer and the coolness of winter, and even the effects of 11-year sunspot cycles, and 30-year climatic cycles. But hidden behind these cycles is a gradual and slowly accelerating increase in the overall temperature, global warming. If we aren’t looking for it, we won’t notice it at all – until it suddenly and catastrophically spirals out of control.

    Waves

    When we think of change as happening to a wide area at once, then instead of cycles we sometimes think of change as happening in waves.

    Probably the most famous example of this is Alvin Toffler’s book The Third Wave240. According to Toffler, “The First Wave is the settled agricultural society which prevailed in much of the world… The Second Wave Society is industrial and based on mass… (and) The Third Wave is Post-Industrial Society.”

    It is not always clear what someone means when they talk of a wave. Toffler’s waves, for example, have been depicted241 as a form of exponential change

    and as a type of242 dialectical change

    The way waves behave can inform us about what to expect from a change, though. Consider how the tsunami spread through the Indian Ocean in 2004243:

Waves are not steady and linear. They interact with each other and with landforms around them. Understanding waves involves not only understanding how they propagate but also understanding these interactions.

    Consider, for example, how the intersection of two waves244 can amplify or dampen the wave:

    Two waves at different frequencies – different pitches – applied on top of each other produce what is called a ‘beat note’. This is the result of them amplifying when they are in phase and cancelling each other out when they are out of phase.
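A rough numerical sketch of the beat effect (my own illustration, in Python): two tones four cycles per second apart produce a combined signal whose peak level swells and fades four times per second.

import math

f1, f2 = 250.0, 254.0          # two slightly different pitches (Hz)
sample_rate = 8000
samples = [math.sin(2 * math.pi * f1 * n / sample_rate) +
           math.sin(2 * math.pi * f2 * n / sample_rate)
           for n in range(sample_rate)]        # one second of the combined signal

window = sample_rate // 40                     # 25 ms windows
for start in range(0, sample_rate, window):
    peak = max(abs(s) for s in samples[start:start + window])
    print(f"{start / sample_rate:.3f} s  peak level {peak:.2f}")   # cycles between ~0 and ~2, four times a second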

    The same effect happens in the world at large. We sometimes talk about the “stars being aligned” or “the right things coming together”. The 60s “Summer of Love”245 is sometimes described in such terms, as it represented the coincidence of widely available drugs, including the invention of LSD, the sexual revolution, made possible by the birth control pill, and the creation of a new form of music.

    As you can easily see from the diagram, a confluence of factors can cause effects all out of proportion to what one might expect from the waves on their own.

     

    Drivers and Attractors

    One effect of the wave analogy is to represent change as something that is overwhelming and inevitable. No doubt this is part of the impression Toffler tried to convey with his title. The thought of change as something that cannot be resisted is a common theme in the literature.

In a sense, it’s true. Change is inevitable. Without change, we would all be static, inert lumps of clay. Our lives and being depend on change. And change happens, everywhere in the world, every minute of the day. As Isaac Asimov says246, “It is change, continuing change, inevitable change, that is the dominant factor in society today. No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be.”

    Maybe so. But as noted above, no change occurs by itself. All change is a type of motion, and all motion has some sort of impetus or cause. Change does not occur in isolation; something makes it happen.

    We sometimes represent these as drivers and attractors. These are a bit like push and pull. A driver is some force or energy behind the change, pushing it forward. An attractor is something in front of the change, pulling it forward.

    You see references to drivers in a lot of political and economic literature. Drivers are often depicted as external forces that push economic or social behaviour in a certain direction. Consider this diagram247, for example:

Here we see three major drivers depicted: ICTs, globalization, and climate change. We see that these drivers are pushing us toward operational efficiency, size and competitiveness, and sustainability.

    These drivers are depicted in a variety of ways. Here we have248 sort of a flow chart:

Again, the use of drivers is as causes that almost force the outcome. It’s as though the authors are intending to say, “Given these forces in the world, we cannot help but change in such and such a way.”

    Attractors are a bit different. Attractors are like gravity: they pull us toward some sort of goal or destination. While drivers seem to force us toward some sort of linear change, attractors seem to pull us in cycles. The spiral-based change typically revolves around an attractor.

    An attractor need not be physical, like gravity. It can also be an objective or goal. While such attractors can motivate change, they can’t really be said to cause change – they require human agency for that. Here’s an example249 of such an attractor:

In this case, the attractor is that sweet spot at the intersection of programming, modelling and the semantic web. Whatever it is that’s in there is pulling the programmer toward it over time.

    Here’s another example250, depicting development toward some military objective. This time the spiral goes up:

     

Theories of change need to take into account the attractors as well as the drivers. Understanding what motivates people is as important as understanding what urges and needs they have. An understanding of this would better inform educational theory.

    In education, people are thought to learn according to different learning styles251. A person might learn better by reading, listening, looking at pictures, or working with his or her hands. But studies of educational outcomes based on learning styles are inconclusive. There doesn’t seem to be an improvement in learning even if the teacher adapts to a student’s learning style.

    But in education, a student’s motivation252 is just as important. Teachers need to adapt not just how they push students toward learning, but how they attract them. A student has to be ready to learn, wanting to learn, and able to overcome the anxiety of learning. Different theories of motivation253 attempt to explain what attracts people to certain kinds of change.

    Design and Selection

In many kinds of change, the result of the change is defined not simply by a process but also by a logic254. The changing image on your computer screen, for example, is not the result of natural forces, but of a specific design.

    This is reflective of the impact choice255 has on change. At any moment in time you and about 6 billion other people – not to mention billions of other animals and insects – are making choices about what to do or say next. Should I finish writing the paper? Stay up late? Drink a beer?

    In computers, changes of state are represented by flow charts256. These charts describe the decisions the software makes – often based on user input – in order to produce a result. But flow charts need not only describe software decisions. They can describe human actions as well. For example, should you change the lamp?
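The same decision structure can be written directly as code. Here is a hypothetical rendering (my own sketch, in Python; the lamp questions are invented for illustration, not taken from the chart referred to above):

def should_change_the_lamp(lamp_works, bulb_on_hand):
    # Each 'if' corresponds to a decision diamond in a flow chart.
    if lamp_works:
        return "Leave it alone."
    if not bulb_on_hand:
        return "Get a new bulb first."
    return "Change the lamp."

print(should_change_the_lamp(lamp_works=False, bulb_on_hand=True))   # "Change the lamp."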

    But how do people actually make decisions? In many cases, they are not rational – they do not compute results like a computer, but rather follow their own sometimes irrational beliefs and inclinations. A great deal of theory supposes that people are rational agents257 – and this supposition is often the cause of error. There are many types of rational behaviour258, and not all are instrumentalist or goal-directed.

Moreover, not all choice is made by humans or rational agents. Animals, plants and even inanimate objects enter into points of decision. These choices may be bounded by the nature and situation of the chooser, but are in other cases quite random and impossible to predict. Will the deer on the highway veer right or left? Will the rock land on the road or roll off to the side? Will this uranium atom decay today or a dozen years from now?

    Genetics, evolution, and similar natural processes are the result of these factors. This is not the place to discuss these in detail. But it is important to take into account that these do not stay the same and that they evolve and adapt as a result of forces such as natural selection. Expecting the bacterium to stay the same, expecting the opposing football team to play the same – these would be mistakes, based on a failure to recognize the influence of adaptation.

    Finally, as suggested above, some changes are genuinely chaotic259 and random260. The outcome cannot be predicted – it depends on factors that may be too small to be measured or simply unknown to science. In such a case, the graph of the future is not a line, but rather, splits up to define a probability space. This is the classic diagram of chaotic change:

    Change progresses on a line for a period of time, then divides into two possibilities, then four, and then an almost infinite number. But note that even in a chaotic system, there is a range of possibilities. It’s like predicting the weather – we might not be able to predict it exactly, but we know it will be warmer in the summer and colder in the winter.
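The line-that-splits picture is the period-doubling behaviour familiar from very simple systems. As a sketch (my own example, in Python, not from the original post), the logistic map x → r·x·(1−x) settles onto one value, then two, then four, and finally onto no repeating pattern at all as the parameter r is turned up:

def long_run_values(r, x=0.5, settle=500, keep=8):
    # Iterate the logistic map, discard the transient, and report the values it keeps returning to.
    for _ in range(settle):
        x = r * x * (1 - x)
    seen = set()
    for _ in range(keep):
        x = r * x * (1 - x)
        seen.add(round(x, 3))
    return sorted(seen)

for r in (2.8, 3.2, 3.5, 3.9):
    print(r, long_run_values(r))    # one value, then two, then four, then many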

    Patterns of Change

    This has been an overview of different types of change. It is by no means a complete description of change. At best, it is an introduction.

    But the main intent of this post is not to describe and explain the different types of change. You can find more detailed and more authoritative treatments in mathematics texts, economics and business texts, and history texts. Indeed, almost any discipline will have its own treatment of change.

The purpose of this article has been to make it clear that it is possible to think systematically about change, and that it is fairly easy to recognize different types of change. Almost every theory you encounter in any discipline will appeal to one of the theories of change described above. Knowing that these theories have properties – and strengths, and weaknesses – in common helps you understand them and criticize them.

    Footnotes

210 Ellen Behrens. It Doesn’t Have to Be That Hard. aLearning Blog. June 6, 2010. http://alearning.wordpress.com/2010/06/06/it-doesnt-have-to-be-that-hard/

    211 Wikipedia. Logarithmic Scale. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Logarithmic_scale

    212 Wikipedia. World Population. Accessed June 7, 2010. http://en.wikipedia.org/wiki/World_population

    213 Wikipedia. Moore’s Law. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Moore%27s_law

    214 Wikipedia. Logarithmic Scale. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Logarithmic_scale

    215 E. Wilma van der Veen. Patterns of Social Change. Social Change: SOC A405 (course website), University of Alaska, Anchorage. October 11, 2002 http://stmarys.ca/~evanderveen/wvdv/social_change/patterns_of_social_change.htm

    216 Wikipedia. Social Progress. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Social_progress

    217 Wikipedia. Arnold J. Toynbee. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Arnold_J._Toynbee

    218 Nobs. A Study of History. Website. October 17, 1993. http://nobsword.blogspot.ca/1993_10_17_nobsword_archive.html (update: 2012 – I have no idea what I was thinking with this reference. Here’s a proper reference: Arnold J. Toynbee. A Study of History. Oxford University Press, 1987 http://books.google.ca/books/about/A_Study_of_History.html )

    219 Mara. The Slayer’s Journey: Buffy as Monomythic Hero. Match Cut (weblog). April 8, 2008. http://match-cut.org/showthread.php?t=757&highlight=buffy

    220 Wikipedia. Hype Cycle. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Hype_cycle

    221 W.D. Phillips. Signals. Design Electronics (web book). Doctronics. 1998. http://www.doctronics.co.uk/signals.htm

    222 .wav sound. W.D. Phillips. Signals. Design Electronics (web book). Doctronics. 1998. www.doctronics.co.uk/sounds/250hz_l.wav

    223 Stephen Downes. Soundation Studio. May 31, 2010. OLDaily (weblog). http://www.downes.ca/post/52566

    224 Soundation Studio. Website. Accessed May 1, 2012. http://soundation.com/studio

    225 Water Cycle. Environment Canada. No longer extant.

    226 Wikipedia. Georg Wilhelm Friedrich Hegel. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Georg_Wilhelm_Friedrich_Hegel

    227 Wikipedia. Thesis, antithesis, synthesis. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Thesis,_antithesis,_synthesis

    228 E. Wilma van der Veen. Website. Accessed May 1, 2012. http://stmarys.ca/~evanderveen/wvdv/index.htm

    229 E. Wilma van der Veen. Patterns of Social Change. Social Change: SOC A405 (course website), University of Alaska, Anchorage. October 11, 2002 http://stmarys.ca/~evanderveen/wvdv/social_change/patterns_of_social_change.htm

    230 Wikipedia. Paradigm Shift. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Paradigm_shift

    231 Wikipedia. Thomas Kuhn. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Thomas_Kuhn

    232 ForexCycle. Elliott Wave. Forex Trading with Elliott Wave. April 19, 2008. http://www.forexcycle.com/elliott-wave/282-forex-trading-with-elliott-wave.html

    233 ForexCycle. Free Elliott Wave Tutorial from Elliott Wave International. Undated. http://www.forexcycle.com/elliott-wave-tutorial.html

    234 David McNamee, Thomas McNamee. The transformation of internal auditing. Managerial Auditing Journal, Vol. 10 Iss: 2, pp.34 – 37. 1995. http://www.emeraldinsight.com/journals.htm?articleid=868232&show=html

    235 Wikipedia. Virtuous circle and vicious circle. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Virtuous_circle_and_vicious_circle

    236 Wikipedia. Feedback. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Feedback

    237 Wikipedia. Network Effect. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Network_effect

    238 Wikipedia. First Mover Advantage. Accessed June 7, 2010. http://en.wikipedia.org/wiki/First-mover_advantage

    239 Wikipedia. Cascade Effect. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Cascade_effect

    240 Wikipedia. The Third Wave (book). Accessed June 7, 2010. http://en.wikipedia.org/wiki/The_Third_Wave_%28book%29

    241 Konrad M. Kressley. Riding the Third Wave. Harbinger (magazine). December 7, 2007. http://www.theharbinger.org/xvi/971209/future.html

    242 R.K. Elliott. The third wave breaks on the shores of accounting. Accounting Horizons (June): 61-85. 1992. http://maaw.info/ArticleSummaries/ArtSumElliott92.htm

    243 Wikipedia. 2004 Indian Ocean earthquake and tsunami. Accessed June 7, 2010. http://en.wikipedia.org/wiki/2004_Indian_Ocean_earthquake

    244 Allen Watson III. More About Nyquist. Web page. 2008. http://www.aw3rd.us/audif/moreNyquist.htm

    245 Wikipedia. Summer of Love. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Summer_of_Love

    246 Richard Derwent Cooke. The Inevitability of Change. I-Change (weblog). July 30, 2009. http://www.i-change.biz/blog/?p=2594

    247 Alagse. The next global driver of change. Promoting Thought Leadership… (website). Undated. http://www.alagse.com/cm/cm2.php

    248 Gecafs. Why has GECAFS taken a “food systems” approach? Undated. http://www.gecafs.org/research/food_system.html

    249 Peter Hale. University of the West of England Home Page – Faculty Online Data (FOLD) http://www.cems.uwe.ac.uk/~phale/

    250 Mitre. Website note responding. http://www.mitre.org/news/the_edge/july_01/jackson2.html

    251 Wikipedia. Learning Styles. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Learning_styles

    252 Original link no longer extant. http://honolulu.hawaii.edu/intranet/committees/FacDevCom/guidebk/teachtip/motivate.htm

    253 Wikipedia. Maslow’s hierarchy of needs. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Maslow%27s_hierarchy_of_needs

    254 Wikipedia. Logic. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Logic

    255 Wikipedia. Choice. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Choice

    256 Wikipedia. Flow Chart. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Flowchart

    257 Wikipedia. Rational Agents. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Rational_agent

    258 Milan Zafirovski. Human Rational Behavior and Economic Rationality. Electronic Journal of Sociology, Volume 7, Number 2. 2003. http://www.sociology.org/content/vol7.2/02_zafirovski.html

    259 Wikipedia. Chaos Theory. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Chaos_theory

    260 Wikipedia. Randomness. Accessed June 7, 2010. http://en.wikipedia.org/wiki/Randomness

    Moncton, June 7, 2010

     

    ‌Speaking in LOLcats

    Transcribed text of an address to the Educational Computing Organization of Ontario, Richmond Hill, Ontario, November 12, 2009. Audio and slides available at http://www.downes.ca/presentation/232

    Thanks everyone, and people online you should be hearing me OK, if not just say something in the chat area. Your chat area is being viewed by an audience here in Richmond Hill, which I had never really heard of before I came here, so I’m kind of… that’s good Christina, excellent.

    Now some of the people in the room may be joining you and anything you say in the chat area can be viewed by people in the room. We’re recording the Elluminate session – I’m not going to press my luck and try for video. Elluminate does support video and it supports it rather well, but I have to be standing right here the entire time because it would use the iSight camera in the monitor (I don’t have a video camera hooked up) and I don’t want to do the entire presentation like this, so I won’t.

    This is the second of two presentations today, and as I said in the first presentation, when you do two presentations in a day, there’s a good one and a bad one – this is the good one. It’s a fairly sweeping and ambitious presentation. It’s probably not the sort of presentation you’ll see in any of the other sessions. I’m trying to go someplace a bit different.

    It’s the first time I’ve tried this material, though I’ve tried bits and pieces of it, and it’s something I’ve been thinking about for a long time. And basically the title sort of speaks for itself, Speaking in LOLcats, What Literacy Means in teh Digital Era, and no, that’s not a typo, and the poor conference organizers trying to preserve that spelling through editing, [voice: I corrected it about six times] corrected about six times. This nice cat, this is my cat Bart by the way, and yes, he is the intellectual that he looks.

    All right, so, let’s roll. Let’s look at some LOLcats. Because, when you put LOLcats in the title you kind of have an obligation to put some LOLcats in the presentation. So, here’s your classical (I wish this monitor were bigger) wait for it, wait for it…

    How many of you are familiar with LOLcats? Oh wow, this is new. If you go to the website Icanhascheezburger.com261 – the link is there on the slides; all the slides will be available on my website this evening, downes.ca262 – and you’ll see hundreds and hundreds of images like this. Now what a LOLcat is, it’s an image, usually but not necessarily of a cat, with some kind of funny phrase. Now the funniness varies and your reaction to the funniness will vary, that’s ok, they don’t have to be brilliant, although some of them are just a scream.

    I thought this one was pretty good:

    263

    Now the thing with LOLcats is, they’re not just pictures of cats with some text. You look at this one, you can’t possibly read that (on the small screen), but, “Love,” it says, “Nothing says ‘I love you’ like a paw in the eye.”

    264

    But of course this LOLcat is hitting you on a few levels: the paw in the eye is the obvious slapstick, but you will all recognize this form, the black border and the picture and the inspirational slogan. That form has been parodied like crazy; I just love some of the parodies. And of course that form is being used here in this particular LOLcat.

    That’s typical of a LOLcat. It will pull some cultural association in on it. If you look at the previous one that we just looked at, when I said “wait for it, wait for it…” you can almost hear that “wait for it, wait for it…” in your mind. There’s a cultural context there. There’s this thing that you’ve heard and seen before that is being applied to this funny picture; that’s what gives it the humour.

    Here’s another one:

    265

    “Garfeild and John: the later years.” (The lighting is so awful [voice: it’s terrible], I wish we could kill – is there a way of killing those front lights? [voice: Not in this room. I’m going to go do that] OK – I’m sorry that we missed… it’s completely washing out the picture. I thought I was so smart, because the morning presentation is all white, and then the afternoon presentation is all black, the two sides, the yin and yang, and all of that was to be silently in the background, so you’d feel that balance, but you wouldn’t really notice why you felt that bal… well now I’ve given it all away.)

    Anyhow. So here’s Garfeild and John: the later years. And, you know, it’s an old man and his cat. And so the LOLcat grabs things out of popular culture, but not necessarily popular culture, just culture. In this case it’s popular culture, it’s Garfield and Jon, right?266 ‘Jon’ of course is misspelled. Actually so is ‘Garfield’. (Again, it’s kind of washed out. That’s too bad. I wonder if… can I give you a bigger picture? It’s just these things are so funny. Yeah… and the problem is, if I do this I can’t advance the slide. [voice: I can advance the slide] You’ll advance the slides in the Elluminate? [voice: you bet] All right. Excellent. Thank you. There, it’s still kind of washed out, look.)

    “I triangulatered…”

    267

    Anyhow, it’s kind of funny, because first of all it’s a made-up word, and secondly, how often do you see cats in the shape of a triangle? But of course there’s a bit of fun there going on because ‘triangulated’, ‘triangularated’, … you know, you’re supposed to see that association, and maybe not, maybe the person just didn’t know how to spell ‘triangulated’. The beauty of LOLcats is, you don’t know. So you have to bring a lot of the humour to it.

    The spelling mistakes in LOLcats are very interesting because LOLcats are very badly spelled, typically, but in a predictable way. So, here we go:

    268

    “Oh noes. Dis watr is wet!” Again, the spelling has a characteristic – it’s a characteristic spelling, it’s a characteristic syntax, it’s kind of a mockery of txtspeak (text speak), it’s kind of a mockery of l337speak (‘leet speak’, or elite speak).

    Txtspeak of course is the spelling people use when they’re texting messages in SMS, l337speak is kind of a gibbled way of spelling hackers use to communicate with each other because – well, I don’t know, it’s not like nobody understands it, everybody understands it, but they use numbers for letters, threes instead of Es, and you get that from typing in your passwords oh-so cleverly. So your password is ‘JasonAlexander’, that’s really easy, but then you put at-signs in for the As and a zero for the O, so you get ‘J@s0n…’, so anyhow, that’s how you remember your password.
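    To make the substitution idea concrete, here is a toy sketch (my own illustration, not something from the talk; the substitution table is invented and real usage varies widely):

        # A toy leetspeak converter: swap letters for look-alike symbols.
        L33T = str.maketrans({"a": "@", "A": "@", "e": "3", "E": "3",
                              "o": "0", "O": "0", "i": "1", "I": "1"})

        def to_leet(text):
            return text.translate(L33T)

        if __name__ == "__main__":
            print(to_leet("JasonAlexander"))   # -> J@s0n@l3x@nd3r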

    And of course they like to mock stereotypes, and the stereotype is the cat in the water (i.e., cats don’t like water). There’s another one with a very similar theme, “I’m in your tub mocking your stereotypes,” is another phrasing for this particular sort of image.

    Don’t you love these? Can you see how these would become addictive? They’re just incredible, I can’t get over them. What really really really attracts me to the internet is that it is just chock-full of stuff like this. Forget about the Open Educational Resources. Forget about online learning. Forget about Twitter and social networking. The internet is full of stuff like this, and it’s the stuff like this that just kills, and makes the internet what it is. That’s why I love it.

    And that’s, really, what I’m talking about today.

    269

    Of course cats don’t actually drink coffee. That’s the joke, right? But of course, that’s not really a cat, that’s us on the horrible horrible day we tried to drink decaf instead of caffeinated coffee thinking it would be good for us. Ha! What were we thinking?

    LOLcats are a commentary on everyday life. The important word there is, they are a commentary. ([Listening to sound from outside] Fourteen. Let’s see if he keeps counting. Yeah, I stop, he stops. I’ll start talking… yeah, hrm. It is pretty funny.)

    Now like I said, the internet’s full of stuff like this. The internet – new media, properly so-called – this is the first thesis of this talk – new media constitutes a vocabulary. Or more accurately, perhaps ([To person outside] Hi, could you please stop counting dishes while I’m… [voice: oh I’m sorry]. He actually was counting… actually I think he was checking to make sure they were all clean. He looked like a manager. That’s the degree of service – I like the fact that somebody was checking to make sure all the plates were clean… less enthused about where he was doing it – you go to where the plates are, right?)

    OK. New media constitutes a vocabulary, constitutes a language, and that when people create artifacts they are quite literally speaking in LOLcats. Or, they might be speaking in Joan Harris cutout dolls:

    270

     

    You notice the same form, though, right? Commentary, popular culture… you know, it’s not just LOLcats people are using to speak online using new media.

    OK, you don’t like that one. Maybe you’ve seen this one:

    271

    This is a YouTube meme and I’ve caught it sort of near the beginning there; it’s a screen shot, not a real video (I didn’t think I could pull off a real video streaming in here and on Elluminate all at the same time, so I put a link to it if you’re curious). If you haven’t seen it – how many of you have seen this? Oh man, you’ve got to get more plugged in! You’re not speaking the language! If you go back to the school, whenever you go back, Monday, if you don’t know this you’ve missed probably the big cultural event, online anyways.

    Anyhow, there are two U.S. soccer players and the one in the red is grabbing the hair of the one in the white and as the stream goes you’ll see her pull her by the hair and drop her to the ground, and the video consists of her doing similar sorts of acts through the course of a soccer game. This video has gone viral – I’m sure you’ve heard the expression “to go viral” – it’s got millions of hits. And the question here isn’t “what is in the video?” – because what is in the video is what I’ve just described, it’s neither here nor there, it’s some nasty play on the soccer pitch – but people consider this worth posting, embedding in their blogs, sharing, and the question is, “what are they saying?” For they certainly are – are they not? – saying something.

    And I look at that and I say something like, this is people expressing a belief or a statement that this kind of behavior is unacceptable. That’s my take on it. Of course, you know, different people, different messages, your mileage may vary. And you bring a lot to this yourself.

    How many of you have heard of XKCD? Ah – more! These are a scream too. I love these. This one – there’s no way you can read the writing so I’ll do it:

     

    272

    So the guy’s saying “I’m locked out and trying to get my roommate to let me in. First I tried her cell phone, but it’s off. Then I tried IRC, but she’s not online.” IRC’s an old-style chat system. “I couldn’t find anything to throw at her window.” Of course. “So I SSHed into the Mac Mini in the living room and got the speech synth to yell at her for me. But I think I left the volume way down so I’m reading the OSX docs to learn how to set the volume via command line.” And then the other person says, “Ah. I take it the doorbell doesn’t work.” Well – you have to read it – it’s a lot funnier when you read it.
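    For what it’s worth, the comic’s scenario corresponds to real commands. A rough sketch (mine, assuming a Mac you have already SSHed into, where the say and osascript utilities are available):

        # Set the output volume from the command line, then have the speech
        # synthesizer read a message aloud. Illustrative only; assumes macOS.
        import subprocess

        def yell_at_roommate(message, volume=80):
            subprocess.run(["osascript", "-e",
                            f"set volume output volume {volume}"], check=True)
            subprocess.run(["say", message], check=True)

        if __name__ == "__main__":
            yell_at_roommate("Hey, I'm locked out. Please open the door.")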

    273

    Now of course it’s funny but the artist is using a cartoon – in the grand tradition of cartoonists – to say something that’s not actually being said in the cartoon.

    Gaping Void’s another one – I’m less enthused with Gaping Void, probably because I’m less enthused with the overall message a hundred of his cartoons have combined to produce, but then a lot of people like it. “The price of being a sheep is boredom,” he writes. “The price of being a wolf is loneliness. Choose one or the other with great care.” I picked this one because this was the one that was kicking around when I was looking for a Gaping Void. But again the same sort of question comes up: what do you suppose the artist is saying? And again, people share these things, and we ask, what do you suppose people are saying when they share these things?

    How many of you have seen 9-11 Tourist Guy?

    274

    See…? You’re missing the best part of the internet! If I have done nothing else for you guys here today I have pointed you to the existence of an entire internet that is not the internet that you know but that is the real internet. Because the internet isn’t Twitter and Flickr and Facebook and all these. The real internet is all of this kind of silly stuff.

    Now of course this is just a picture of some tourist, we know not who, and of course he’s on top of the World Trade Center, and some wag has come along and photoshopped an airplane into it. And of course we have the picture, and then we have the backstory, “this camera was found in the rubble, blah blah blah, blah,” and of course it’s completely made up, because the shadows are all wrong, and there’s a whole site devoted to explaining why it’s completely made up because all the shadows are wrong.

    But anyhow, you have Tourist Guy, and then you have Tourist Guy in front of the Hindenburg:

    275

     

    And if you follow that link down there (http://urbanlegends.about.com/library/blphoto-wtc.htm) you’ll find this tourist guy all over the place. The sinking of the Titanic. The eruption of Pompeii. The scene from Independence Day where they blow up the White House (I love that scene).

    And so on. And there’s this whole kind of theme coming out here, isn’t there?

    And if you think about it, again, there’s a language being used, people are saying something, there’s a structure to these things (you note they preserve the guy exactly), there’s a logic behind the images, and in fact it’s a logic that people understand. It’s not an accidental logic. People understand this logic. Enough so that, in light of recent events, they can actually point out this logic to people who don’t know:

    276

    And what this says is, “Yo, Imma let you finish, but Pearl Harbor was the greatest attack on America ever.” Now you see what’s happening here. The logic is, of course, you have spectacular disasters, and Tourist Guy now is a way of saying 9-11 belongs to that category of events. And now, we’re taking that, and parodying that with a recent Kanye West interrupting (oh I’ve forgotten her name) [voices: Taylor Swift] Taylor Swift, yeah – you all know Taylor Swift [laughter]. It’s interesting, you all know Taylor Swift, you don’t know about the soccer player [voice: we don’t know Taylor Swift but we know this event, the interruption]. Yeah. The best description I heard of it, “it was like stomping on a kitten.”

    This is analogy, metaphor, and I’m sure out there somewhere there’s a picture of Kanye West stomping on a kitten. Again, you see, you follow the logic through. I’ve tried to find it. Go ahead. [Voice: most of these are made right after, the day after?] Oh yeah, in the Language Log discussion area as well277. He said “Yo. Imma let you finish.” Now he means “I’m going to let you finish.” “Imma let you finish…” – who speaks like that? And, well, nobody, it’s just kind of the way it came out. But yeah, you’re quite right, there was a whole site, and you’ve got Kanye West interrupting, well, “Yo, this is the best constitution in the world,” etc.

    Again, there’s a style, there’s a structure, there’s a meaning, there’s an intent, and people understand this, the people who create these things understand this. To more or less a degree. Again, some of the people making these things… ([Sound: beep beep beep] Something’s backing up [laughter] it just kills me… never mind. It’s funny more than anything because, you know, if I was a professional speaker this wouldn’t distract me at all.)

    All right. This first thesis is totally intended to be taken one hundred percent literally. I am not expressing a metaphor. New media is a language. The artifacts – the LOLcats, the images, the viral videos – are words.

    And this is not surprising and this is not unusual and this is not a concept that we can’t wrap our head around because we – those of us who grew up before LOLcats, and therefore speak in different words – understand this. We grew up learning about, for example, body language.

    Body language is a language. There are different things that are said. We can understand body language. If we get good at it we can actually speak in body language. Right? We can do it.

    Some of us do it better than others.

    278

    279

    Clothing, uniforms, flags, drapes, there’s all kinds of ways we use non-linguistic artifacts in order to express ourselves. And these non-linguistic artifacts have the properties of a language: grammar and syntax and all the rest of it. And we use them for the same purpose, to express ourselves.

    Now, the expressions are non-linguistic. We don’t know – we can’t convert ‘that dress’ into a set of sentences. And it would be ridiculous for us to try. The best we can get is kind of a rough approximation. But we would certainly be wrong, or at the very least misleading, to say “oh they mean nothing by their dress.” Of course they do! And every person in this room means something by the way they’re dressed, and if we look at the different people and the different ways they’re dressed, they’re all saying different things. Now I won’t point people out [nervous laughter] but you can all look around and… and you know, it’s all in Carnegie’s How to Win Friends and Influence People,280 there’s a whole section on dress in there, etc. And you say things with how you dress. And if you don’t believe me, go watch What Not To Wear.281

    How many of you watch What Not to Wear? That’s it? Well – isn’t that odd. Maybe it’s just me. I don’t follow any of the advice, but I watch it.

    Another type of language, and again, it’s a language that we’re familiar with, and a language we use more or less well, is the language of maps, diagrams, graphics, etc. And here, this kind of language (it’s sort of hard to see again), it’s a fairly well-known graphic, it’s the social network space expressed as a map, so you have the Gulf of YouTube, the Plains of MySpace, the Ocean of Subculture, little islands here, the big Wikipedia project, the Blogipelago, Noob Sea [voice: it’s cold where Windows Live is there] it’s cold where Windows Live is there? Where is Windows Live? Oh way up there, the icy north. Well, yeah, you go there, it’s frozen. “Here be anthropomorphic dragons.”

    282

    I love how Orkut and Live Journal and Facebook and Xanga, they all look like Southeast Asia. The Vietnam people, yeah. There’s probably messages in there, messages in there the author had no intent of making, but again, you know, it’s a language, it’s a syntax, etc.

    The second thesis I would like to propose, because we’ve had some fun and now I want to get obscure, because it’s not one of my talks – at least, not one of my good talks – if it isn’t a little bit obscure: we can understand these languages within a logical-semiotic framework. That’s going to take a little bit of explaining.

    What I mean by that is that we can understand the language of LOLcats, the language of new media, with a framework that describes what we are saying, how we are saying it, how we come to know, how we come to believe things, in these languages. So the second part, this second part, of the presentation is intended to present that.

    Now the same sort of thing, the same sort of framework, that underlies our languages (and this is part of the second thesis) also underlies information theory.

    Now this diagram is a diagram of how light signals go through all this whole process and get to our visual cortex:

    283

     

    It’s a way of understanding sensory perception as a type of information theory. There’s a whole literature devoted to that, and I refer to Knowledge and the Flow of Information by Fred Dretske.284 But again, it’s the same kind of thing. If we go to the previous slide and we see this ‘sender’, ‘sign vehicle’, ‘immediate object’, ‘recipient’, that flow of communication, we have that flow of information, the same sort of thing underlies inference and belief.

    285

    Now this picture, this is – I went to Australia in 2004 and I went there specifically in search of insights, and I got a few, which is good, because it would be a shame to pay to go all the way to Australia in search of insights and not get any. One of the ones I got was in a place called Kakadu.

    Kakadu is a park at the north end of Australia, it’s near Darwin, and there are these cave paintings on it from the Aboriginals ten thousand years ago or whatever. And what’s interesting about these cave paintings is, they’re very detailed. This is a fish, but if you look at it closely the fish has been cut open and these are fish guts.

    And you might be thinking, well, why do Aboriginals have pictures of fish guts on cave walls? And the answer of course is, this is how they’re communicating what they know about fish, and what parts of the fish to eat, and what you’re going to find inside a fish if you cut it in half.

    There’s a whole set of information transmitted. How do we know that? We study the symbol, the culture, we study the signs, and we make an inference.

    And this is the same process that they go through, that the Aboriginals go through, in understanding their surroundings, and this is the same process that empirical scientists go through when they study the world around us. Science can be thought of, inference can be thought of, as a language as well, the language in which the participants in the conversation are yourself and the world. And just as the language has signs and symbols and underlying meanings and all of that, so does the world. And it’s the same kind of thing, the same kind of flow, that gets us from observations to theory to what we would call scientific fact.

    What I’m trying to say here is – as it says up there – science can be seen as language, learning as conversation, and knowledge as inference. These are all different ways of doing the same thing, and the medium in which we do this is the language of this new media, the language of LOLcats. And other things, because obviously scientists do not do their inferring in LOLcats.

    Although – they could. And that would be interesting.

    What this means is we need to get beyond a fairly narrow language-based way of looking at not just communication (although certainly communication) but also this language-based way of looking at learning, this language-based way of looking at science.286 Or thinking, or reasoning.

    How many of you saw ‘Leave Britney Alone’?

    287

    The same. Right. Again, it’s more of this culture. And the thing is, all he says is “Leave Britney alone, leave Britney alone” – it’s very repetitive – but of course there’s so much more in there, there’s so much more being said in that video than just the words. And then of course, again, 25 million views, 25.5 million views, and the guy is just wailing. And the funny thing is, if I’m to judge by looking at the popular culture, it appears to have been successful. I think they are actually leaving Britney alone now. Not as much as he would like, but…

    So, what sort of conceptions (should we move beyond)?

    1. Well, the conceptions like “messages have a sender and a receiver.” I sometimes think of that as ‘the teleological theory of communication’, the idea that communications have to come from someone, have to be directed to someone – sometimes there is no receiver, or no intended receiver.
    2. Another (if you will) ‘folk’ linguistic theory, or ‘folk’ psychosemantic theory: “words get meaning from what they represent.” To a large degree, when we look at this we see this is just simply not the case.
    3. “Truth is based on the real world.” Again, this is one of these language-based things that we need to let go of. If you examine closely what ‘truth’ is, truth is a property of a proposition. So the question as to “what truth is” is the question “what makes that proposition true?” What makes that sentence ‘true’? But sentences do not need to refer to things that are in the real world, and yet can still be true. I’ll give you a classic example from Bertrand Russell: brakeless trains are dangerous. What makes that sentence true? You all agree it’s true, right? But what makes it true? Certainly not the fact that there are brakeless trains that are dangerous. Because, in fact, there are no brakeless trains. The reason for that is, they’re dangerous. We don’t make them without brakes. So how do we know… and you can see we now go into a whole song and dance about what makes a statement like that, a counterfactual, true.288
    4. Another conception that we need to throw away: “Events have a cause and these causes can be known.” That’s one of the most fundamental principles I think of common knowledge and common culture, but it’s a principle that’s deeply embedded in the linguistic origins of its original statement. And the reason why you can get a general statement like that is because language289 is artificially precise and artificially general.
    5. “Science is based on forming and testing hypotheses.” That’s the old deductive-nomological picture of science which you probably learned in science class – I won’t speculate how many years ago – formulated by a guy called Carl Hempel and completely destroyed by people like Imre Lakatos and Larry Laudan and Thomas Kuhn and others in the 70s and 80s, around the time we were (some of us) getting out of science.

    These pictures – these things, and others, there’s a whole set of them – taken together constitute a world view, constitute a way of thinking about thinking and learning as static, linear (and) coherent. The world… (now they applauded, what time is it? [voice: you still have 20 minutes left, it’s 25 after] so somebody really shot, really missed their… that’s why I always worry about, what am I going to do if I… well, anyhow – I am way too easily distracted to be a speaker) – so it’s this picture of the world as though it were text-based language, a picture of the world as though it were a book, a library. When we look at this logico-semiotic picture we see that we can see the world in many more ways.

    So here’s the frame:

    with a little Charles Morris, Foundations of the Theory of Signs,290 a little Derrida and a little Lao Tzu. This is a bit arbitrary, it’s an exercise in categorization, and like all exercises in categorization it is therefore an exercise in fiction. But it’s also a picture of what I think knowledge is: knowledge as conversation, knowledge as interactions in these languages. It’s a bit more sophisticated a way of looking at learning and discovery and inference and the rest than say Bloom’s taxonomy or whatever.

    So, these are the six major elements: syntax, semantics, and pragmatics; that comes from Charles Morris. Cognition, that comes from a whole host of people. Context, Derrida, and change, from Lao Tzu with a little Marshall McLuhan. Each one of these gives us a whole set of questions that we can ask about new media.

    Take a look at the syntax, for example. Syntax – we grew up thinking of syntax as rules. But syntax is really about how we organize, how we construct, how we create our creations.

    Remember the LOLcats, you know, there are unspoken rules for their construction, remember, the reference to culture, the way of spelling your words, these are all aspects of syntax. There are different ways of expressing syntax. Rules – a grammar, or a logical syntax – is just one way of doing it. But syntax can be expressed through archetypes. For those of you who like psychology, Carl Jung has your archetypes there, but archetypes can be thought of more broadly.

    Archetypes can be thought of in terms of paradigms, archetypes can be thought of in terms of – I’m looking for another word, I’m looking for another word that means paradigms, but I don’t need to say it, because I can’t think of it. Or, they can be Platonic forms, they can be the ideal triangle, they can be the ideal circle, they can be the elements of Euclidean geometry. Etc. The medievals got very hung up on this, looking for the archetypical colour ‘red’ – ‘which must exist somewhere and can’t simply exist on somebody’s shirt or on a flag or whatever.’ So that was their whole thing – medieval philosophy is very strange.

    There’s a whole school of thought that looks at syntax in terms of operations. There’s a whole school of mathematical logic called ‘operationalism’ where mathematics is thought of not as quantities and things like that but rather things that we do, ways that we make things happen. Of course, these can be extended to motor skills.

    There’s a diagram I didn’t put into this slide where there’s a whole chart of different motor skills, and all these motor skills correspond to actions on your computer screen, and there are characteristic things: a Windows user expects when they go like that [gesturing] and click, that the program will close. That’s the motor skill and an expected outcome. That’s a syntax. It’s a syntax composed of an operation. Every application has a syntax. Understanding that every application has a syntax helps us understand applications.

    And there’s even a site out there – I saw it just a few days ago – that looks at (what was it?) sizing bars in Adobe software. Now you all know sizing bars. A sizing bar is a little bar with a little arrow and you click and hold on the arrow and you move up and down and that makes things bigger and smaller. It’s an operation, we all understand that, a simple rule framework. But you look at Adobe’s and in a single Adobe application – I think it was Photoshop Elements they were referring to – there were no fewer than six different ways of presenting sizing bars. This is a total syntactic fail, because if you want the same result in an application, you should have the same syntax. But they’re all different. They look different, they have different information, they’re shaped differently, etc.

    Those of you who are in tune will have noticed the way I expressed that. “Total syntax fail.” Look up in Google something called “Fail Blog”.291 Fail Blog is a blog that looks at syntactic fails and labels them “Fail!” It’s a commercial enterprise and so what they’ve done very cleverly is to set up this blog and then create – I don’t want to say a ‘meme’ because it’s too soft to be a meme – but to create this kind of language form where people point to something really stupid and say “Fail!” So, “This is a syntax fail.” So every time someone says “this is a syntax fail” it’s actually an advertisement for Fail Blog.

    Syntax at work. Isn’t syntax wonderful?

    This is – oh well, predicting my conclusion, but – this is new media literacy. Understanding that little kind of logic. Patterns. Regularities. Substitutivity. Eggcorns. I’ve got a slide later on about eggcorns. Eggcorns are wonderful. And I’m almost certainly not going to get to it, but – what an eggcorn is is a substitution of a word – so you put in the wrong word in place of the right word, but the wrong word kind of makes sense. It comes from using the word ‘egg corns’ instead of ‘acorns’. ‘Egg corn’ – it’s kind of, yeah, you know, yeah, they look like corn, and it’s an egg, for an oak tree, so ok, yeah, you get that.

    You know, there’s a – what’s my favorite? – “on a different tact.” t-a-c-t. Now of course the expression is “on a different tack” – t-a-c-k – and it’s derived from sailing, you maneuver against the wind by tacking, you tack back and forth (I learned to sail once). But people say ‘tact’ as in ‘tactic’ and that kind of makes sense, but it’s not the right expression. Anyhow, there’s hundreds of them, and somebody out there has collected them and is still collecting them.292 Because on the real internet people do stuff like that.

    Similarities. I won’t go into it. Oh, tropes, there’s a site out there, Television Tropes.293 Or Television and Movie Tropes, I forget exactly what the phrasing is, I couldn’t find the blog (well I didn’t look for it, I didn’t have time) but there’s a site that collects and categorizes every known television trope, and a trope is a characteristic plot or schema that drives a television show, and we all know the television tropes. I’m not going to try to think of one off-hand. But you know, the classic “Somebody said something that isn’t quite true to protect somebody’s feelings, the lie magnifies, the person is finally caught in a contradiction, and the show ends with his humiliating confession and everybody hugging because everybody knew it was…” That’s the plot of every single Three’s Company294 show. Ever. Or “they finally find a way to get off the island but Gilligan does something stupid and at the end of the show they’re stuck on the island.” But there are tropes that are used over and over and over and over again.

    Similarities, analogies, metaphors. There are rules, mechanisms, syntax for creating these. It’s not just “blank as a blank”. What is it that creates a similarity? A similarity is the having of a set of properties in common. But not just any set of properties in common.

    They need to be properties that are relevant or salient. Having the right set of properties in common at the right time in such a way that these properties have an impact on the situation at hand. That’s syntax. And that underlies similarity. And that’s how we make metaphor work, and that’s how a lot of these LOLcats work. They work through similarity.
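    One way to make “the right properties in common” concrete (my sketch, not the speaker’s; the property names and weights are invented) is to count shared properties only in proportion to how salient they are in the situation at hand:

        # Context-weighted similarity: shared properties count, but only in
        # proportion to their salience in the current situation.
        def similarity(a, b, salience):
            shared = a & b
            either = a | b
            num = sum(salience.get(p, 0.0) for p in shared)
            den = sum(salience.get(p, 0.0) for p in either)
            return num / den if den else 0.0

        if __name__ == "__main__":
            cat = {"furry", "aloof", "dislikes water"}
            office_worker = {"aloof", "dislikes water", "drinks coffee"}
            # In a joke about Mondays, "aloof" carries the weight;
            # "furry" and "dislikes water" barely matter.
            salience = {"aloof": 0.9, "drinks coffee": 0.8,
                        "dislikes water": 0.1, "furry": 0.1}
            print(round(similarity(cat, office_worker, salience), 2))  # 0.53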

    Semantics is the second component of the frame. Semantics consists of theories of truth, meaning, purpose, goal. It’s how we make sense out of things. This is the thing that everybody’s keyed in on, so I’m not even going to linger too long on it, because everybody’s all about sense-making. It’s really a very small part of it. But everybody’s on about sense-making. The sense, the reference of a term or a proposition, the denotation of it, the connotation, or the implication, the thing that you’re supposed to think about through some sort of process of association, and as I said, there’s a whole syntax for that.

    Or, another way of looking at semantics is through what may be called ‘interpretations.’ Probability theory, for example. You’re all familiar with probability, right? “The probability that the Sun will go down today is, what, one!” OK, well, what did I mean when I said that? What I didn’t mean is “The Sun will go down,” because I cannot refer to a future event because it hasn’t happened yet and therefore there is no referent for my reference. So, I must have meant something though, I didn’t just utter empty words. Well, there are different interpretations of what I said.

    Carnap, Rudolf Carnap for example, represents probability as expressing the number of instances in the logical set of all possible worlds – so, he doesn’t say worlds – all possible descriptions, the number that corresponds with the description I have.295 So basically Carnap divides the world into all the logical possibilities, your sentence is true in ‘that many’ of them, that’s probability.

    Hans Reichenbach, on the other hand, being more of a realist, realizes we cannot know what the entire set of possibilities is, and so he gives us an interpretation based on frequency.296 It’s the old interpretation that we’re pretty familiar with from inductive theory. “It happened, it happened, it happened, it happened, it happened, it happened a hundred times, therefore, yeah, it will probably happen again.”

    Frank Ramsey – the radical – says probability has utterly nothing to do with any of that. The probability that something is true is measured in terms of how much money you would bet on it. [laughter] It’s true! And you think this is – this is why I’m expressing it, I want to stretch you out of your frames – there are people out there in the world, mostly involved in money markets, who define truth in exactly that way. That’s what they believe truth is. It’s what people will wager, it’s what people will bet, it’s what people will spend, that is – you know, markets determine all truths.
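    As a rough sketch (mine, not the speaker’s; the numbers are illustrative), the frequency and betting interpretations can each be written down in a few lines, which makes the contrast between them easy to see:

        # Two of the interpretations above as tiny functions. Carnap's logical
        # measure is omitted: it needs an enumeration of all possible
        # state descriptions, which doesn't fit in a few lines.
        def frequency_probability(outcomes):
            """Reichenbach-style: relative frequency of the event in past trials."""
            return sum(outcomes) / len(outcomes)

        def betting_probability(stake, payout_if_true):
            """Ramsey-style: the odds you will accept reveal your degree of belief."""
            return stake / payout_if_true

        if __name__ == "__main__":
            # "It happened 97 times out of 100, so it will probably happen again."
            print(frequency_probability([1] * 97 + [0] * 3))           # 0.97
            # Willing to pay $90 now for a ticket that pays $100 if true.
            print(betting_probability(stake=90, payout_if_true=100))   # 0.9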

    So there are different ways of looking at these things. So that’s probability.

    Association – I talked about association; the logical structure, the semantical structure, consists of the different ways of associating things, from similarity through contiguity (or being next to each other), through back-propagation (or feedback), or through Boltzmann mechanisms (association through harmony – “these things are associated because the world is just more harmonious if they are associated”). I’m glossing over something obviously that’s a bit more complex.
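    A contiguity-based association, for instance, can be sketched as nothing more than co-occurrence counting (my illustration, not a mechanism described in the talk):

        # Things become associated simply by appearing together; here the
        # co-occurrence count stands in for association strength.
        from collections import Counter
        from itertools import combinations

        def build_associations(scenes):
            pairs = Counter()
            for scene in scenes:
                for a, b in combinations(sorted(set(scene)), 2):
                    pairs[(a, b)] += 1
            return pairs

        if __name__ == "__main__":
            scenes = [["cat", "cheezburger"], ["cat", "keyboard"],
                      ["cat", "cheezburger"], ["dog", "keyboard"]]
            print(build_associations(scenes).most_common(1))
            # -> [(('cat', 'cheezburger'), 2)]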

    Decisions, decision theory, voting, consensus, emergence – all of these are ways of getting at not just truth (although there’s certainly ways of getting at truth) but also meaning, also purpose and goals.

    All of these ways of looking at the world, of coming up with ways – ah, I hate starting a sentence without knowing how it’s going to end, because sometimes it never ends – all of these are ways of coming up with meaning, truth, etc., in the world. And different languages, different statements in language, and different artifacts will express truth, meaning, etc., in different ways.

    And your task in understanding these is not to simply assume – well, there’s a literal way of understanding this, but – moving out beyond that, actually asking the question, “Does it depend on an interpretation? A way of looking at the world? Does it depend on belief, wagering, etc.?”

    Third frame (now I’m probably running out of time – five? Ah, I’m good). Pragmatics. Use, actions, impact – again, this is probably (and you know when I’m writing these slides, I’m thinking “this is all stuff you know”). Pragmatics – the use of words (when we think of language in particular) but in our context the use of artifacts to do things.

    And, you know, you can draw all kinds of things out. J.L. Austin used to talk about “speech acts”297 and John Searle has his taxonomy of acts that we can do with words298:

    • assertives – which is, you know, the plain ordinary asserting of something, but not just that
    • directives
    • commissives
    • expressives
    • declarations

    But also, in the context of new media, harmful acts. Harassment, bullying, spamming, flaming. One of the things that I’ve had to do over the years is to be in charge of various discussion boards, and in various discussion boards, as I know you know, people flame, people bully, people do all kinds of really nasty stuff. And as soon as you begin to lean on them, they – especially if they’re from south of the border – they go, “Freedom of speech! Freedom of speech!”

    But of course what they’re doing is imposing a semantical interpretation of language on you, that whatever they are doing with language, it must be to express a fact, a state of affairs about the world. But of course, that is not the case at all. There are all kinds of things you can do with language that have utterly nothing to do with expressing an opinion. If I yell “Fire!” in this room I am not uttering, “I am of the opinion that there is a fire.” That is not my intent; that is not the expected outcome. The expected outcome is “everybody get out of the room now.” Something like that. It’s an action, not a statement.

    Expressions that are harassing, that are bullying – the whole set of things that people do in discussion lists – fall under this category of harmful acts. I have interpreted large swaths of online conversation in these discussion areas and in these blogs not as intentions of making a point, making an argument, whatever, but as intentions of harming people, intentions of undermining them, making them feel uncertain. And if you think of language in that way, and if you think of the presentation of media in general in that way, that is also the case, is it not, in advertising.299

    Advertising isn’t intended to make a statement, or even to convince you; advertising is intended to commit an act, to do something, to make you do something – not by reasoning your way to doing something, but by, well, as the studies show, undermining your confidence, making you feel inadequate, and all that. These are not propositional uses of language; these are speech acts.

    There’s more. We could be making points through interrogation – of course Heidegger has a whole discussion of that. We could be expressing meaning through use; there’s again a whole – and I have not nearly enough time to talk about that – a whole doctrine about defining words through the use of those words. But this is only the third of six, so…

    The whole set of uses of language in order to reason, to infer, to explain – these are the scientific, the argumentative, the cognitive. And again, a lot of times when people are thinking of critical thinking they are thinking specifically of these kinds of uses of language.

    There are four major categories, and I won’t linger on them, they can all be looked up fairly easily:

    • the plain ordinary description (the assertion that X) which may be a definite description (and we’re back to Russell again), or allegory, metaphor, etc., there are all kinds of ways of describing something, or I can just point to it;
    • definition (to say X is a Y) and again there’s a whole list of different ways of defining things;
    • to make an argument (X therefore Y) and there’s a range of different kinds of arguments, and again you don’t need to note these down, these will all be available after the talk;
    • and then finally, to explain (X because of Y). This is a very common sort of thing.

    And you look at these artifacts, and when you look at them ask yourself, “Could I interpret this as an explanation?” And you realize, “Oh yeah. They are explaining something behind this.

    They’re trying not simply to present or to describe but they’re offering an explanation of why this is the case,” and we come back to Kanye stomping on a kitty. Anyhow.

    The next one: context. This is – you can’t read this and I know you can’t read this and I put in this slide anyway because it’s so compelling. This is ‘occasion-based marketing’. And what I’m trying to draw out here is: people understand that different things say different things in different contexts or different occasions. So we look closely. There are three different occasions: the instrumental, the savoring and the inspirational. Now they totally make these up, but that’s OK, they still work, right?

    So, instrumental. Energy-drink. Red Bull. They’re drinking for a purpose. Savoring. Coffee. And then mysteriously they put “Starbucks.” Well, OK, some people are like that. I’m more of a, actually, my coffee drinking is more instrumental, which is why I go to Tim Horton’s. And then inspirational, wine. The French vintner.

    But now this logic of context gives us a whole set of tropes that will inform the content of the advertisement. Instrumental: price sensitivity, quick and easy, positive nutrition, blah blah blah. Savoring: freshness, flavor, narrative – who would have thought that narrative informs coffee? But if you look at Starbucks advertising, narrative is part of the coffee. Inspirational: small craft production, right? Who wants to savour wine from a wine factory? Really. Though if you ask me: best wine, in a box. I acknowledge, that just offended the wine drinkers. Who have knowledge and passion about their wine.

    Context permeates our logic and our language to an extent, I think, that most people are not aware of. Explanation, I like how Bas van Fraassen summed it up beautifully: explanations only make sense in the context of what could have happened instead.300 So, why did the flowers grow beside the house? Because it was warm. If it were cold they would not have grown.

    Because Aunt Mae planted the bulbs. Because if she hadn’t planted the bulbs… you see, you have different explanations based on your expectation of what might have happened instead.

    This is the basis for so many jokes it’s not funny. I can’t go into them now, but… messing around with the explanatory context is a source of humour, and this humour can be found in many of these artifacts.

    Vocabulary – alternatives – the meaning of a word depends on what the alternative words could have been. What is the meaning of the word ‘red’? Depends on what other colours you think there are. If you think there are only two colours in the world, ‘red’ and ‘blue’, the meaning of the word ‘red’ is very broad. [Voice: Stephen, you’re just a few minutes over]
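    That dependence on alternatives can be put in code (my illustration, with invented hue values and vocabularies): the same hue gets a different label depending on which set of colour words it is competing against:

        # Label a hue (0-360 degrees) with the nearest prototype in a given
        # vocabulary; the label depends on which alternatives exist.
        def nearest_colour(hue, vocabulary):
            def circular_distance(name):
                d = abs(hue - vocabulary[name])
                return min(d, 360 - d)
            return min(vocabulary, key=circular_distance)

        if __name__ == "__main__":
            two_colour_world = {"red": 0, "blue": 240}
            richer_world = {"red": 0, "orange": 30, "yellow": 60, "blue": 240}
            print(nearest_colour(40, two_colour_world))  # -> red ('red' is broad)
            print(nearest_colour(40, richer_world))      # -> orange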

    Change is the last one, and there’s aspects of change: flow, historicity, McLuhan’s four questions to ask about change, and progression.301 There’s an analysis of games based on the structure of change in those games. Etc.

    So those frames can be used to understand new media. The third thesis: fluency in those languages constitutes 21st century learning.

    Now we hear lots of descriptions of 21st century learning, typically as descriptions of content and skills.302 But my argument here, and I think I can make it stick, is that such descriptions are not adequate. Focusing on the tools – I love these tool things – is like focusing on pens, pencils and the printing press instead of the Magna Carta or the Gutenberg Bible. That’s part one. Part two, though: Focusing on content is like focusing on what the Magna Carta or the Gutenberg Bible say instead of what people did with it. We need to go beyond skills, beyond content.

    We go back to Papert, with whom many of you will be familiar, and the philosophy of constructionism: people constructing artifacts are creating media they use to think. Now how are they thinking with it? With those six frames that I just gave you.

    So the questions we need to ask: how do we converse? How do we manage these media? Who is in charge of these media? What vocabulary’s being used? What kind of sense are we making? What kind of languages do we model?

    We come back to the CCK09 course, with which I started, and it’s not about presenting content, or even skills, at all. It’s about creating this space in which people will create artifacts as a conversation. Now what we’re about is encouraging growth in their literacy, in this environment, rather than the acquisition of specific knowledge or specific skills. This is so much the point that we don’t care – well, we do a little, but not much – what knowledge they get; we don’t even care what tool they use. What we care about is that they participate in this conversation.

    We want them to use the language of LOLcats to learn how to think – if you’re wondering what this says, it says “Astrophysics made simple” – and then I have some examples which I won’t be able to get into, but you can look at those and discuss those among yourselves, or not, as you will, in the days, weeks and months following this presentation.

    So, that essentially is the presentation ‘Speaking in LOLcats,’ and I thank you for your patience and for staying in the room even though we were a little bit late. And I thank all of you, I assume you’re still online, for staying online. Thank you very much.

    Footnotes

    261 I Can Has Cheezeburger? Website. http://icanhascheezburger.com/

    262 Stephen Downes. Speaking in Lolcats: What Literacy Means in teh Digital Era. Stephen’s Web (weblog) Presentations. November 12, 2009. http://www.downes.ca/presentation/232

    263 Image from I Can Has Cheezeburger. November 11, 2009. http://icanhascheezburger.com/2009/11/11/funnypicturestoespassingin543/

    264 Image from I Can Has Cheezeburger. November 11, 2009. http://icanhascheezburger.com/2009/11/11/funny-pictures-love/

    265 Image from I Can Has Cheezeburger. November 10, 2009. http://icanhascheezburger.com/2009/11/10/funny-pictures-later-years/

    266 Jim Davis. Garfield. http://www.garfield.com/

    267 Image from I Can Has Cheezeburger. November 9, 2009. http://icanhascheezburger.com/2009/11/09/funny-pictures-triangulatered/

    268 Image from I Can Has Cheezeburger. November 10, 2009. http://icanhascheezburger.com/2009/11/10/funny-pictures-dis-watr-is-wet/

    269 Image from I Can Has Cheezeburger. November 10, 2009. http://icanhascheezburger.com/2009/11/10/funny-pictures-same-since-decaff/

    270 Dyna Moe’s Photostream. Flickr. No longer extant. http://www.flickr.com/photos/nobodyssweetheart/40898054

    271 Still from video. Associated Press. Raw Video: Soccer Player Throws Fist, Pulls Hair. YouTube. November 6, 2009. http://www.youtube.com/watch?v=FMAtxuCpsMU

    274 David Emery. The Tourist Guy of 9/11. About.com Urban Legends. Accessed November, 2009. http://urbanlegends.about.com/library/blphoto-wtc.htm

    275 David Emery. The Tourist Guy of 9/11. About.com Urban Legends. Accessed November, 2009. http://urbanlegends.about.com/library/blphoto-wtc.htm

    276 David Emery. The Tourist Guy of 9/11. About.com Urban Legends. Accessed November, 2009. http://urbanlegends.about.com/library/blphoto-wtc.htm

    277 Mark Liberman. I’m a? Language Log. September 9, 2009. http://languagelog.ldc.upenn.edu/nll/?p=1752

    278 Vicked Vicky. Body Language: Actions Do Speak Louder Than Words. ExciteFun. October 28, 2008. http://forum.xcitefun.net/body-language-actions-do-speak-louder-than-words-t13371.html

    279 Army Strong Stories. Arab Clothing. Undated. http://armystrongstories.com/blogAssets/wayne-wall/12 JUN Arab Clothing.jpg

    280 Dale Carnegie. How to Win Friends and Influence People. 1936. Simon & Schuster; Reissue edition. November 3, 2009. http://www.amazon.ca/How-Win-Friends-Influence-People/dp/0671723650

    281 The Learning Channel. What Not to Wear. Television Series. January 18, 2003 – present. http://tlc.howstuffworks.com/tv/what-not-to-wear

    282 Randall Munroe. Online Communities. XKCD number 256. 2007. http://xkcd.com/256/

    283 Wikibooks. Consciousness Studies/The Philosophical Problem/Machine Consciousness. Undated; Accessed November 12, 2009. http://en.wikibooks.org/wiki/Consciousness_Studies/The_Philosophical_Problem/Machine_Consciousness

    284 Fred Dretske. Knowledge and the Flow of Information. The University of Chicago Press. 1999. http://www.press.uchicago.edu/ucp/books/book/distributed/K/bo3642299.html

    285 Lutz Goetzmann and Kyrill Schwegler. Semiotic aspects of the countertransference: Some observations on the concepts of the ‘immediate object’ and the ‘interpretant’ in the work of Charles S. Peirce. International Journal of Psycho-Analysis, 85:1423-1438. 2004. See also Jay Zeman, Peirce’s Theory of Signs, http://www.clas.ufl.edu/users/jzeman/peirces_theory_of_signs.htm

    286 Note that by ‘language-based’ here I refer to the way of looking at learning and science as composed of traditional text-based language, i.e., sentences and words in English or Japanese, etc.

    287 Chris Crocker. Leave Britney Alone. YouTube. September 10, 2007. http://www.youtube.com/watch?v=kHmvkRoEowc

    288 OK, Russell’s actual example is “the present king of France is bald” – the ‘brakeless trains’ example has a much older history as an example of the existential fallacy. I probably got it from Richard Braithwaite, Scientific Explanation: A Study of the Function of Theory, Probability and Law in Science. Cambridge University Press. May 1, 1968. p. 305. But the point still holds – “the present king of France is bald” can be true or false even though there is no king of France (it’s false, because there is no king of France).
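    For readers who want the logical form behind this footnote, here is a minimal sketch of Russell’s standard analysis (added as an illustration; the notation is mine and not part of the original note). On this reading, “the present king of France is bald” asserts that there is exactly one present king of France and that he is bald:

    % K(x): x is a present king of France; B(x): x is bald
    \exists x \, \big( K(x) \land \forall y \, ( K(y) \rightarrow y = x ) \land B(x) \big)

    Because nothing satisfies K(x), the existential claim fails, so the sentence comes out false rather than lacking a truth value – which is the point the footnote is making.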

    289 I.e. traditional text-based language.

    290 Charles W. Morris. Foundations of the Theory of Signs. International Encyclopedia of Unified Science. The University of Chicago Press. 1953. http://www.amazon.com/Foundations-Charles-International-encyclopedia-unified/dp/B00087C2PQ

    291 Fail Blog. Weblog. http://failblog.org/

    292 Chris Waigl. The Eggcorn Database. Website. http://eggcorns.lascribe.net/

    293 Television Tropes and Idioms. Wiki web site. http://tvtropes.org/pmwiki/pmwiki.php/Main/HomePage

    294 IMDB. Three’s Company. TV Series (1976-1984). http://www.imdb.com/title/tt0075596/

    295 Rudolf Carnap. Logical Foundations of Probability. University of Chicago Press; 2nd edition. 1967. http://www.amazon.com/Logical-foundations-probability-Rudolf-Carnap/dp/B0006P9S8Y

    296 Hans Reichenbach. The Theory of Probability. University of California; Second Ed edition. 1949. http://books.google.ca/books?id=WUm_2lAvj2cC&printsec=front_cover&redir_esc=y

    297 J.L. Austin. How to do Things with Words: The William James Lectures delivered at Harvard University in 1955. Ed. J. O. Urmson and Marina Sbisà, Oxford: Clarendon. 1962. http://www.amazon.com/How-Do-Things-Words-Lectures/dp/0674411528

    298 John Searle. Speech Acts: An Essay in the Philosophy of Language (1969). Cambridge University Press; New Ed edition. January 2, 1969. http://www.amazon.co.uk/Speech-Acts-Essay-Philosophy-Language/dp/052109626X

    299 Randall Munroe. Pickup Artist. XKCD number 1027. 2011. http://xkcd.com/1027/

    300 Bas C. van Fraassen. The Scientific Image. Clarendon Press. May 1, 1995. http://www.amazon.ca/Scientific-Image-Bas-van-Fraassen/dp/0198244274

    301 Marshall Soules. McLuhan Light and Dark. Wayback. 1996.

    302 Partnership for 21st Century Skills. http://www.p21.org/. Accessed November 1, 2009. Website not responding: www.21stcenturyskills.org/documents/MILE_Guide_091101.pdf

    Richmond Hill, Ontario, November 12, 2009

     

     

     

    This chapter was adapted from the full collection of essays by Stephen Downes available via PDF at https://www.oerknowledgecloud.org/archive/Connective_Knowledge-19May2012.pdf

    The text was reformatted to be presented as a Pressbook, but the content was maintained to the best of the editor’s ability.