Title: "Social psychology's ideal of studying social behavior, research efforts in speech, journalism, and communication."
General Intention: The myth of human intellectual creativity
If the human neural system is a biological mechanism (as genetics and evolution show), then it is shorn of magic, spirituality, and mysticism. It cannot create gold from lead, truth from falsehood, or knowledge from dogma. Any mechanism has finite capability. No machine can provide more than its fuel allows; there is always a loss in every machine, none even reaches 100% efficiency, much less produces more than it is given. On this view, the human brain cannot be considered creative: it cannot create knowledge, it can only discover it. The knowledge produced by thought must be contained in the premises (input data) on which that thought is based. Knowledge may be discovered by the human brain only when it has adequate, truthful data on which to work. If the input data are inadequate or untruthful, the output conclusions can be disastrous, and often are. Using unproven premises undermines the reasoning that follows in proportion to their error. In the computer world we refer to this process limitation as "garbage in, garbage out."

Conjecture, imagination, hearsay, and introspection are all useful tools in the formation of new theory, but they will not reliably provide working knowledge, separately or collectively. Theory must be proven before it is applied, whether the subject is a space shuttle, a bridge, or an airplane. It is even more important when the subject is human. A human culture should not be used as experimental fodder.
I. Introduction: Interpersonal Communication
Interpersonal Communication involves the study of both the processes and effects of social interaction, usually in face-to-face situations. Both verbal and nonverbal behaviors are studied in laboratory and naturalistic contexts. Cognitions, emotions, and discourse patterns occurring during conflict, lying, and persuasion are some of the factors commonly studied. Communication in health-related contexts and communication in personal and family relationships are two important contexts in which theories are applied.

Organizational Communication is the study of human interaction within complex organizations and the management of organizational behavior. Course work in organizational communication offers both qualitative approaches to data analysis (category development and descriptive observation techniques) and quantitative approaches (measurement, psychological categories, and behavioral science research designs). The faculty views these approaches as complementary; many students attempt to achieve mastery in both modes.

The Rhetorical and Language Studies area focuses on how human symbols effect social and political change. Although rhetoric has been a popular area of study since antiquity, the Department focuses on such contemporary matters as political campaigning, culture and communication, social movement rhetoric, ethics and persuasion, the nature of public argument, discourse and knowledge, the formation of language communities, and cognitive linguistics.
These matters are treated in three distinct sub-areas: (1) Rhetorical Theory and Criticism, focusing on how public discourse is conceived and executed, with special attention to the analysis of persuasive and cultural texts; (2) Political Communication, examining how political leaders and the mass media change public opinion and fashion legislative policy; and (3) Semiotic Studies, offering training in the naturalistic study of human symbol systems and consideration of how linguistic and gestural behaviors affect everyday social interaction.

Rhetoric and Language Studies embraces a variety of research methodologies, including historiography, archival studies, textual criticism, conversation analysis, and content analysis. Interdisciplinary opportunities to study Rhetoric and Language also abound at U.T., including work with internationally famous scholars in the Departments of English, Linguistics, and Anthropology.

The scientific study of communication phenomena is justified in terms of a gradual movement toward truth. Armed with a diverse range of theories and methods, the communication researcher produces evidence that will either support or refute a particular claim about the nature of the "truth" of the phenomenon under investigation. In many important respects, the communication researcher is like the protagonist in a hard-boiled detective story. Both the detective and the communication researcher seek the truth by considering the artifacts and traces left behind. The detective considers clues that will reveal the truth of the killer's identity, while the communication researcher considers empirical evidence that bears on the truth value of particular hypotheses. To continue the analogy, the route to the truth is often fraught with red herrings: clues that seem to indicate the truth by pointing to a suspect who is, in fact, innocent.
In a good detective story, the revelation of the killer's identity at the end may ultimately contradict the expectations created in the audience by the structure of the story and its handling of the evidence. While all the clues seemed to point to one or two suspects, the real killer turns out to be someone quite different. When the clues are rearranged by the detective in the climax of the story, the truth can be seen in the new pattern of evidence. The story is said to have a twist in the tail.

The evidence presented in the story can suggest many suspects as the killer. Until the point where the killer's identity is revealed, the evidence is equivocal and claims to the truth are competing; this creates the sensation of suspense in the story. The scientific study of communication phenomena parallels this situation: it too has many clues and many competing theories to account for them. However, for the communication researcher there is one very important difference from the fictional narrative of the detective story: the truth of the matter is never revealed. The communication researcher remains in a constant state of suspense, believing the truth to be of one kind until new evidence is produced which suggests it may be of another kind. The researcher also operates with the knowledge that new clues can show up at any time. The final and definitive truth always remains one step ahead of the researcher's conjectures. This state of suspense is never totally resolved, and it leaves an open site for many accounts of the true nature of communication phenomena. The activity of the communication researcher thus becomes one of persuasion rather than demonstration: a case for the truth must be made which is more persuasive than other accounts.
Like the detective, the communication researcher must arrange the clues to find the killer. The tenets of scientific method are employed in making this case, using the language of probability, significance, control, and validity. From this point of view the account may seem very convincing indeed. However, because the truth has not been revealed, there is always the possibility of the twist in the tail. Even though all the clues and reasoning of the communication researcher strongly suggest one particular version of the reality of a phenomenon, the possibility that a new arrangement will prevail is always present. With this come the conditions of possibility for alternative accounts of the truth which run counter to the expectations generated by the scientific account of the evidence. The constant potential for the twist in the tail makes possible the existence of truths quite different from that espoused by the scientific reasoning of the communication researcher. These accounts generate their own kind of validity through the internal coherence they create in their handling of the evidence they draw upon.

In this paper, the topic of subliminal persuasion is used as a site that exemplifies the situation described above. Subliminal persuasion is considered here as a topic that has been addressed as a communication phenomenon by the academic disciplines of experimental psychology and marketing. A particular account of the "truth" of the phenomenon has been produced through these disciplines' consideration of various kinds of evidence. However, subliminal persuasion is also the driving force behind a lucrative industry of subliminal self-help audio cassettes, which exists on the basis of a knowledge of the phenomenon quite different from that produced by experimental psychologists.
While experimental psychology maintains that subliminal persuasion is not a phenomenon capable of significantly changing people's behaviors and attitudes, the knowledge underlying the emergence and success of subliminal self-help tapes asserts that subliminal persuasion is a valid and powerful phenomenon. How can this claim co-exist with the truth as espoused by the scientific community? It is suggested here that the knowledge underlying the subliminal tape industry represents a different arrangement of the evidence. As long as the possibility of the twist in the tail is present, there is reason to believe this knowledge and its validation will continue to co-exist alongside the knowledge of science. It is therefore important to study and understand the nature of these accounts and the structures by which they maintain their coherence and credibility.
II. Major Goals of the Studies:
1. It appears that media and communications education is confronted with a dilemma: there is a growing need to include statistical understanding, reasoning, and thinking (Garfield et al., 2002) as learning goals in the curriculum, while most communication students have a negative attitude toward statistics, perform poorly in statistics courses, and tend to avoid the subject altogether if they can.
2. Because mass communication media have a central place in the social, cultural, political, and economic life of contemporary societies, especially those countries that have entered, or are seeking to enter, the age of the information society, mass media provide vast career opportunities. Since the traditional media of communication are today complemented by new technologies, a new generation of media professionals is required to work in this converging environment. We want to prepare people who have the skills and knowledge to thrive in this ever-changing, competitive environment.
3. Many of the interviews and presentations I do today address the question: why do we need doctoral study in design? This question most often comes from practitioners and faculty in a field that has only a short history of research and a long tradition of training in know-how, in the craft of solving problems with the information immediately at hand. It is a reasonable question to ask about a field that is not well understood by the public or by popular media that view design mostly in terms of how things look. But ironically, the greatest skepticism about expanding design research programs seems to reside within the discipline itself, where there is ongoing debate about what constitutes design knowledge. By contrast, the notion of a design research culture does not seem odd to people in fields outside design, where among the defining characteristics of professions, as opposed to trades, are segments of practice in which the sole activity is the generation of new knowledge. There is broad recognition that knowledge generation sustains the evolution of a discipline and particular interest in the value of design research in cross-disciplinary investigations.
4. Concepts of Leadership
I used to think that running an organization was equivalent to conducting a symphony orchestra. But I don't think that's quite it; it's more like jazz. There is more improvisation. — Warren Bennis
Good leaders are made, not born. If you have the desire and willpower, you can become an effective leader. Good leaders develop through a never-ending process of self-study, education, training, and experience (Jago, 1982). This guide will help you through that process.
To inspire your workers to higher levels of teamwork, there are certain things you must be, know, and do. These do not come naturally but are acquired through continual work and study. Good leaders are continually working and studying to improve their leadership skills; they are NOT resting on their laurels.
Definition of Leadership
The meaning of a message is the change which it produces in the image. — Kenneth Boulding in The Image: Knowledge in Life and Society
Before we get started, let's define leadership. Leadership is a process by which a person influences others to accomplish an objective and directs the organization in a way that makes it more cohesive and coherent. This definition is similar to Northouse's (2007, p. 3): leadership is a process whereby an individual influences a group of individuals to achieve a common goal.

Leaders carry out this process by applying their leadership knowledge and skills. This is called Process Leadership (Jago, 1982). However, we also have traits that can influence our actions. This is called Trait Leadership (Jago, 1982), reflecting the once-common belief that leaders were born rather than made. These two leadership types are shown in the chart below (Northouse, 2007, p. 5):
While leadership is learned, the skills and knowledge possessed by the leader can be influenced by his or her attributes or traits, such as beliefs, values, ethics, and character. Knowledge and skills contribute directly to the process of leadership, while the other attributes give the leader certain characteristics that make him or her unique.
Skills, knowledge, and attributes make the Leader, which is one of the:
Four Factors of Leadership
There are four major factors in leadership (U.S. Army, 1983):
Leader
You must have an honest understanding of who you are, what you know, and what you can do. Also, note that it is the followers, not the leader or anyone else, who determine whether the leader is successful. If followers do not trust their leader or lack confidence in him or her, they will be uninspired. To be successful you have to convince your followers, not yourself or your superiors, that you are worthy of being followed.
Followers
Different people require different styles of leadership. For example, a new hire requires more supervision than an experienced employee. A person who lacks motivation requires a different approach than one with a high degree of motivation. You must know your people! The fundamental starting point is having a good understanding of human nature, such as needs, emotions, and motivation. You must come to know your employees' be, know, and do attributes.
Communication
You lead through two-way communication. Much of it is nonverbal. For instance, when you "set the example," that communicates to your people that you would not ask them to perform anything that you would not be willing to do. What and how you communicate either builds or harms the relationship between you and your employees.
Situation
All situations are different. What you do in one situation will not always work in another. You must use your judgment to decide the best course of action and the leadership style needed for each situation. For example, you may need to confront an employee for inappropriate behavior, but if the confrontation is too late or too early, too harsh or too weak, then the results may prove ineffective.
Also note that the situation normally has a greater effect on a leader's actions than his or her traits. This is because while traits may have impressive stability over time, they have little consistency across situations (Mischel, 1968). This is why a number of leadership scholars think the Process Theory of Leadership is more accurate than the Trait Theory of Leadership.
Various forces will affect these four factors. Examples of forces are your relationship with your seniors, the skill of your followers, the informal leaders within your organization, and how your organization is organized.
Boss or Leader?
Although your position as a manager, supervisor, lead, etc. gives you the authority to accomplish certain tasks and objectives in the organization (called Assigned Leadership), this power does not make you a leader; it simply makes you the boss (Rowe, 2007). Leadership differs in that it makes the followers want to achieve high goals (called Emergent Leadership), rather than simply being bossed around (Rowe, 2007). Thus you get Assigned Leadership through your position, and you display Emergent Leadership by influencing people to do great things.
Bass' Theory of Leadership
Bass' theory of leadership states that there are three basic ways to explain how people become leaders (Stogdill, 1989; Bass, 1990). The first two explain the leadership development for a small number of people. These theories are:
- Some personality traits may lead people naturally into leadership roles. This is the Trait Theory.
- A crisis or important event may cause a person to rise to the occasion, which brings out extraordinary leadership qualities in an ordinary person. This is the Great Events Theory.
- People can choose to become leaders. People can learn leadership skills. This is the Transformational or Process Leadership Theory. It is the most widely accepted theory today and the premise on which this guide is based.
Leadership Models
Leadership models help us to understand what makes leaders act the way they do. The ideal is not to lock yourself into a type of behavior discussed in the model, but to realize that every situation calls for a different approach or behavior. Two models will be discussed: the Four Framework Approach and the Managerial Grid.
Four Framework Approach
In the Four Framework Approach, Bolman and Deal (1991) suggest that leaders display leadership behaviors in one of four types of frameworks: Structural, Human Resource, Political, or Symbolic.
This model suggests that leaders can be put into one of these four categories and there are times when one approach is appropriate and times when it would not be. That is, any style can be effective or ineffective, depending upon the situation. Relying on only one of these approaches would be inadequate, thus we should strive to be conscious of all four approaches, and not just depend on one or two. For example, during a major organization change, a Structural leadership style may be more effective than a Symbolic leadership style; during a period when strong growth is needed, the Symbolic approach may be better. We also need to understand ourselves as each of us tends to have a preferred approach. We need to be conscious of this at all times and be aware of the limitations of just favoring one approach.
Structural Framework
In an effective leadership situation, the leader is a social architect whose leadership style is analysis and design, while in an ineffective leadership situation, the leader is a petty tyrant whose leadership style is a fixation on details. Structural Leaders focus on structure, strategy, environment, implementation, experimentation, and adaptation.
Human Resource Framework
In an effective leadership situation, the leader is a catalyst and servant whose leadership style is support, advocacy, and empowerment, while in an ineffective leadership situation, the leader is a pushover whose leadership style is abdication and fraud. Human Resource Leaders believe in people and communicate that belief; they are visible and accessible; they empower, increase participation, support, share information, and move decision making down into the organization.
Political Framework
In an effective leadership situation, the leader is an advocate whose leadership style is coalition building, while in an ineffective leadership situation, the leader is a hustler whose leadership style is manipulation. Political leaders clarify what they want and what they can get; they assess the distribution of power and interests; they build linkages to other stakeholders, using persuasion first, then negotiation and coercion only if necessary.
Symbolic Framework
In an effective leadership situation, the leader is a prophet whose leadership style is inspiration, while in an ineffective leadership situation, the leader is a fanatic or fool whose leadership style is smoke and mirrors. Symbolic leaders view organizations as a stage or theater in which to play certain roles and give impressions; they use symbols to capture attention; they try to frame experience by providing plausible interpretations of events; they discover and communicate a vision.
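The four frameworks lend themselves to a compact summary table. The sketch below restates the archetypes described above as a Python lookup; the table layout and the `archetype` helper are illustrative conveniences of this guide, not part of Bolman and Deal's own materials:

```python
# Bolman and Deal's Four Framework Approach, summarized as a lookup table.
# For each framework: the effective archetype, the ineffective archetype,
# and the characteristic style, as described in the text above.
FRAMEWORKS = {
    "Structural": ("social architect", "petty tyrant", "analysis and design"),
    "Human Resource": ("catalyst and servant", "pushover", "support and empowerment"),
    "Political": ("advocate", "hustler", "coalition building"),
    "Symbolic": ("prophet", "fanatic or fool", "inspiration"),
}

def archetype(framework: str, effective: bool = True) -> str:
    """Return the leader archetype for a framework, depending on whether
    the leadership situation is effective or ineffective."""
    good, bad, _style = FRAMEWORKS[framework]
    return good if effective else bad
```

Reading the frameworks this way underlines the model's point: the same framework has both an effective and an ineffective face, so no single entry in the table is "the right style" on its own.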
For an activity, see Bolman and Deal's Four Framework Approach.
Managerial Grid
The Blake and Mouton Managerial Grid, also known as the Leadership Grid (1985), uses two axes:
- "Concern for people" is plotted using the vertical axis
- "Concern for task or results" is plotted along the horizontal axis.
Both axes have a range of 1 to 9. The notion that just two dimensions can describe managerial behavior has the attraction of simplicity. These two dimensions can be drawn as a graph or grid:
Most people fall somewhere near the middle of the two axes — Middle of the Road. But, by going to the extremes, that is, people who score on the far end of the scales, we come up with four types of leaders:
- Authoritarian — strong on tasks, weak on people skills
- Country Club — strong on people skills, weak on tasks
- Impoverished — weak on tasks, weak on people skills
- Team Leader — strong on tasks, strong on people skills
The goal is to be at least in the Middle of the Road, but preferably a Team Leader; that is, to score at least 5,5 and ideally move toward 9,9.
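The grid classification can be sketched as a small function. Note that the numeric cutoffs below are my own illustrative assumptions: Blake and Mouton define the named styles at the extremes of the scales and 5,5 as Middle of the Road, but do not prescribe exact boundaries between them.

```python
def grid_style(task: int, people: int) -> str:
    """Classify a leader on the Blake-Mouton Managerial Grid.

    Both scores run from 1 (low concern) to 9 (high concern). The
    high/low cutoffs (>= 7, <= 3) are illustrative assumptions, not
    part of the original model.
    """
    if not (1 <= task <= 9 and 1 <= people <= 9):
        raise ValueError("scores must be between 1 and 9")
    high_task, high_people = task >= 7, people >= 7
    low_task, low_people = task <= 3, people <= 3
    if high_task and high_people:
        return "Team Leader"
    if high_task and low_people:
        return "Authoritarian"
    if low_task and high_people:
        return "Country Club"
    if low_task and low_people:
        return "Impoverished"
    return "Middle of the Road"
```

For example, `grid_style(9, 9)` yields "Team Leader" while `grid_style(5, 5)` yields "Middle of the Road", matching the named positions on the grid above.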
Authoritarian Leader (high task, low relationship)
People who get this rating are very much task oriented and are hard on their workers (autocratic). There is little or no allowance for cooperation or collaboration. Heavily task oriented people display these characteristics: they are very strong on schedules; they expect people to do what they are told without question or debate; when something goes wrong they tend to focus on who is to blame rather than concentrate on exactly what is wrong and how to prevent it; they are intolerant of what they see as dissent (it may just be someone's creativity), so it is difficult for their subordinates to contribute or develop.
Team Leader (high task, high relationship)
This type of person leads by positive example and endeavors to foster a team environment in which all team members can reach their highest potential, both as team members and as people. They encourage the team to reach team goals as effectively as possible, while also working tirelessly to strengthen the bonds among the various members. They normally form and lead some of the most productive teams.
Country Club Leader (low task, high relationship)
This person uses predominantly reward power to maintain discipline and to encourage the team to accomplish its goals. Conversely, they are almost incapable of employing the more punitive coercive and legitimate powers. This inability results from fear that using such powers could jeopardize relationships with the other team members.
Impoverished Leader (low task, low relationship)
A leader who uses a "delegate and disappear" management style. Since they are not committed to either task accomplishment or maintenance, they essentially allow their team to do whatever it wishes and prefer to detach themselves from the team process, allowing the team to suffer from a series of power struggles.

The most desirable place for a leader to be along the two axes at most times would be a 9 on task and a 9 on people — the Team Leader. However, do not entirely dismiss the other three. Certain situations might call for one of the other three styles at times. For example, by playing the Impoverished Leader, you allow your team to gain self-reliance; be an Authoritarian Leader to instill a sense of discipline in an unmotivated worker. By carefully studying the situation and the forces affecting it, you will know at what points along the axes you need to be in order to achieve the desired result.
For an activity, see The Leadership Matrix.
Total Leadership
What makes a person want to follow a leader? People want to be guided by those they respect and who have a clear sense of direction. To gain respect, leaders must be ethical. A sense of direction is achieved by conveying a strong vision of the future. When a person is deciding whether she respects you as a leader, she does not think about your attributes; rather, she observes what you do so that she can know who you really are. She uses this observation to tell whether you are an honorable and trusted leader or a self-serving person who misuses authority to look good and get promoted. Self-serving leaders are not as effective because their employees only obey them rather than follow them. Such leaders may succeed in many areas, but only because they present a good image to their seniors at the expense of their workers.
Be Know Do
The basis of good leadership is honorable character and selfless service to your organization. In your employees' eyes, your leadership is everything you do that affects the organization's objectives and their well-being. Respected leaders concentrate on (U.S. Army, 1983):
- what they are [be] (such as beliefs and character)
- what they know (such as job, tasks, and human nature)
- what they do (such as implementing, motivating, and providing direction).
The Two Most Important Keys to Effective Leadership
According to a study by the Hay Group, a global management consultancy, there are 75 key components of employee satisfaction (Lamb & McKee, 2004). They found that:
- Trust and confidence in top leadership was the single most reliable predictor of employee satisfaction in an organization.
- Effective communication by leadership in three critical areas was the key to winning organizational trust and confidence:
- Helping employees understand the company's overall business strategy.
- Helping employees understand how they contribute to achieving key business objectives.
- Sharing information with employees on both how the company is doing and how an employee's own division is doing — relative to strategic business objectives.
So in a nutshell — you must be trustworthy and you have to be able to communicate a vision of where the organization needs to go. The next section, "Principles of Leadership", ties in closely with this key concept.
Principles of Leadership
To help you be, know, and do, follow these eleven principles of leadership (U.S. Army, 1983). Note that later chapters in this guide expand on these and provide tools for implementing them:
- Know yourself and seek self-improvement - In order to know yourself, you have to understand your be, know, and do attributes. Seeking self-improvement means continually strengthening your attributes. This can be accomplished through self-study, formal classes, reflection, and interacting with others.
- Be technically proficient - As a leader, you must know your job and have a solid familiarity with your employees' tasks.
- Seek responsibility and take responsibility for your actions - Search for ways to guide your organization to new heights. And when things go wrong, as they always do sooner or later, do not blame others. Analyze the situation, take corrective action, and move on to the next challenge.
- Make sound and timely decisions - Use good problem solving, decision making, and planning tools.
- Set the example - Be a good role model for your employees. They must not only hear what they are expected to do, but also see it. "We must become the change we want to see." - Mahatma Gandhi
- Know your people and look out for their well-being - Know human nature and the importance of sincerely caring for your workers.
- Keep your workers informed - Know how to communicate with not only them, but also seniors and other key people.
- Develop a sense of responsibility in your workers - Help to develop good character traits that will help them carry out their professional responsibilities.
- Ensure that tasks are understood, supervised, and accomplished - Communication is the key to this responsibility.
- Train as a team - Although many so-called leaders call their organization, department, section, etc. a team, they are not really teams; they are just a group of people doing their jobs.
- Use the full capabilities of your organization - By developing a team spirit, you will be able to employ your organization, department, section, etc. to its fullest capabilities.
Attributes of Leadership
If you are a leader who can be trusted, then those around you will grow to respect you. To be such a leader, there is a Leadership Framework to guide you:
BE KNOW DO
BE a professional. Examples: Be loyal to the organization, perform selfless service, take personal responsibility.
BE a professional who possesses good character traits. Examples: honesty, competence, candor, commitment, integrity, courage, straightforwardness, imagination.
KNOW the four factors of leadership — follower, leader, communication, situation.
KNOW yourself. Examples: strengths and weakness of your character, knowledge, and skills.
KNOW human nature. Examples: Human needs, emotions, and how people respond to stress.
KNOW your job. Examples: be proficient at your own tasks and be able to train others in theirs.
KNOW your organization. Examples: where to go for help, its climate and culture, who the unofficial leaders are.
DO provide direction. Examples: goal setting, problem solving, decision making, planning.
DO implement. Examples: communicating, coordinating, supervising, evaluating.
DO motivate. Examples: develop morale and esprit de corps in the organization, train, coach, counsel.
Environment
Every organization has a particular work environment, which dictates to a considerable degree how its leaders respond to problems and opportunities. This is brought about by its heritage of past leaders and its present leaders.
Goals, Values, and Concepts
Leaders exert influence on the environment via three types of actions:
- The goals and performance standards they establish.
- The values they establish for the organization.
- The business and people concepts they establish.
Successful organizations have leaders who set high standards and goals across the entire spectrum, such as strategies, market leadership, plans, meetings and presentations, productivity, quality, and reliability. Values reflect the concern the organization has for its employees, customers, investors, vendors, and surrounding community; these values define the manner in which business will be conducted. Concepts define what products or services the organization will offer and the methods and processes for conducting business. Together, these goals, values, and concepts make up the organization's "personality," or how the organization is perceived by both outsiders and insiders. This personality defines the roles, relationships, rewards, and rites that take place.
Roles and Relationships
Roles are positions defined by a set of expectations about the behavior of any job incumbent. Each role has a set of tasks and responsibilities that may or may not be spelled out. Roles have a powerful effect on behavior for several reasons, including the money paid for performing the role, the prestige attached to it, and the sense of accomplishment or challenge it provides.

Relationships are determined by a role's tasks. While some tasks are performed alone, most are carried out in relationship with others. The tasks determine whom the role-holder is required to interact with, how often, and toward what end. Normally, the greater the interaction, the greater the liking, which in turn leads to more frequent interaction. In human behavior, it is hard to like someone we have no contact with, and we tend to seek out those we like. People tend to do what they are rewarded for, and friendship is a powerful reward. Many tasks and behaviors associated with a role are brought about by these relationships; that is, new tasks and behaviors are expected of the present role-holder because a strong relationship was developed in the past, either by that role-holder or by a prior one.
Culture and Climate
There are two distinct forces that dictate how to act within an organization: culture and climate.
Each organization has its own distinctive culture. It is a combination of the founders, past leadership, current leadership, crises, events, history, and size (Newstrom & Davis, 1993). This results in rites: the routines, rituals, and the "way we do things." These rites signal what it takes to be in good standing (the norm) and direct the appropriate behavior for each circumstance.
The climate is the feel of the organization: the individual and shared perceptions and attitudes of the organization's members (Ivancevich, Konopaske, & Matteson, 2007). While culture is the deeply rooted nature of the organization, the result of long-held formal and informal systems, rules, traditions, and customs, climate is a short-term phenomenon created by the current leadership. Climate represents the members' beliefs about the "feel of the organization." This individual perception comes from what people believe about the activities that occur in the organization. These activities influence both individual and team motivation and satisfaction, such as:
- How well does the leader clarify the priorities and goals of the organization? What is expected of us?
- What is the system of recognition, rewards, and punishments in the organization?
- How competent are the leaders?
- Are leaders free to make decisions?
- What will happen if I make a mistake?
Organizational climate is directly related to the leadership and management style of the leader, based on the leader's values, attributes, skills, actions, and priorities. Compare this to "ethical climate": the "feel of the organization" about the activities that have ethical content, or those aspects of the work environment that constitute ethical behavior. The ethical climate is the feel about whether we do things right, or whether we behave the way we ought to behave. The behavior (character) of the leader is the most important factor shaping the climate.
Culture, on the other hand, is a long-term, complex phenomenon. Culture represents the shared expectations and self-image of the organization: the mature values that create "tradition," or the "way we do things here." Things are done differently in every organization. The collective vision and common folklore that define the institution are a reflection of culture. Individual leaders cannot easily create or change culture because culture is a part of the organization itself. Culture influences the characteristics of the climate through its effect on the actions and thought processes of the leader. But everything you do as a leader will affect the climate of the organization.
The Process of Great Leadership
Kouzes and Posner (1987) identify five practices on the road to great leadership that are common to successful leaders:
- Challenge the process - First, find a process that you believe needs to be improved the most.
- Inspire a shared vision - Next, share your vision in words that can be understood by your followers.
- Enable others to act - Give them the tools and methods to solve the problem.
- Model the way - When the process gets tough, get your hands dirty. A boss tells others what to do, a leader shows that it can be done.
- Encourage the heart - Share the glory with your followers' hearts, while keeping the pains within your own.
5. Citizens of developed and developing nations alike live in a global information context where information is a commodity that currently rivals factors such as control of natural resources, capital and industrial production as an important determinant of global power.
The traditional arbiters and purveyors of "culture" (including governments, churches, educational and scientific organizations) have lost much of their influence when compared to the influence of mass media. Public discourse increasingly takes place around an agenda set by the media. People, whether they live in Manila, Moscow or Morgantown, now have nearly simultaneous access to the same images and viewpoints in the interpretation of events. In long industrialized nations and newly industrialized nations alike, the social, political and cultural arenas of life are defined and debated in ways controlled by the media. The media play an ever more important role in such events as political campaigns, the overthrow and creation of governments, and in the way wars are planned, fought and interpreted. The media increasingly shape consciousness and define the quest for the meaning of life.

The Regulation of a Public Resource in the Public Interest

Commitments to public service obligations, once a part of a social contract involving the government, its citizens and the media industries, have been abrogated in the United States in favor of marketplace regulation, a concept now being exported to other nations as well. Experience during the decade of the '80s and following has shown that this type of regulation has not served the public interest but rather has pandered to what interests the public. At the same time that mass communication has come to be more important to social and cultural processes, the media themselves are undergoing great change. Traditional definitions of media practices, such as the line between entertainment and news, have become blurred. In the electronic media, producers now enjoy greater freedom in what may be "aired" regardless of consideration of the nature of the audience or community sensibilities, which once were honored. The media, particularly television, have enormous impact in the lives of people and societies over a relatively short time.
This impact may at various times be positive or negative, but currently the negative impact of the entertainment media, advertising, and even the way news programs are constructed appears to outweigh the more positive benefits.
III. Components of Media Communication Education
Department of Teacher Education. University of Helsinki.
Research Report 150.
This article is concerned with analysing media communication education, or educational media communication (Footnote 1). At the same time, some focus will be laid on modern information and communication technologies (MICT). The main idea is to put forward a number of components whose role and function, however, call for further reconsideration.
Keywords: Information and Communication Technologies; Media Communication; IT; CAI; CAL; CMHC; CIT; MICT.
"You can lead a computer to the Superhighway but you can't make it think." (Des Wallemoor)
Users' Relation to Technology
One of the basic issues that has not yet received much attention is the end users' relation to technology in general. This question is fundamentally concerned with epistemological, ontological and axiological questions that are not always explicitly expressed when discussing the introduction of modern information and communication technologies (MICT) into schools. One way to classify users' relations is to point to their inner feelings about how they react to new technologies when having to face them, or when getting into a position in which they have to express their standpoint towards MICT. The following is an introductory classification which should be tested in different groups of end users, but it usually does not seem difficult to identify in the common-room, for instance, individuals representing the following categories:
- Staying involuntarily or being pushed outside (dropouts)
- Being worried about or scared of MICT
- Hating or disliking any idea of having to use MICT in one's work
- Having a superficial or reluctant attitude towards MICT
- Following suit but with no authentic enthusiasm
- Imitating others (colleagues, friends) so as not to be taken for backbenchers
- At once hating and loving MICT
- Getting fond, perhaps little by little, of educational applications of MICT
- Pioneering, being enthusiastic about educational applications of MICT
- Acting as crackers, hackers or just leading the way
What Are Modern Information and Communication Technologies?
What is meant by modern information and communication technologies? Historically speaking, first there was ADP (automatic data-processing), soon to be replaced by computer-based education (CBE), which was further divided into CAI (computer-assisted instruction) or CAL (computer-assisted learning) and into CMI (computer-managed instruction) (for a more detailed analysis, see e.g., Tella 1991; Tella 1994a; Tella 1994b).
The general construct so far has been IT (information technology, sometimes technologies). In a larger perspective, IT is no longer enough, although it is still used occasionally in educational contexts as well. For instance, in the English National Curriculum, IT is said to provide a natural medium for creating, storing, retrieving and communicating information and thus being an ideal resource in language learning (Modern Languages 1992, F1). Information as such is not enough; what is needed is communication. Therefore it is easily understood why a lot of researchers have started talking about communication and information technologies (les technologies de communication et d'information), abbreviated as CIT. This construct is often used in Unesco documents as well. However, in order to avoid any embarrassing connotations due to the pronunciation of that abbreviation in English, it is generally considered wiser to talk about ICT (information and communication technologies) or NICT (new information and communication technologies).
Speaking of "new" information and communication technologies is, in the final analysis, somewhat misleading, as many of the applications are based on rather old technology, i.e., on the telephone network. If "new" is used, it should be understood to imply the latest services, especially telematic and electronic. In this article, these technologies will be referred to as MICT (modern information and communication technologies), as it can be considered a wide enough construct to cover all up-to-date technologies and, more importantly, all pedagogical applications of earlier IT. One way of classifying MICT is given in TABLE 1.
The classification in TABLE 1 is meant to give a general outline of the different ways of using computer applications in education. It enumerates some of the main technological developments (left column), with a few references to possible educational applications or emphases (right column). The two columns are interdependent and many components overlap, as they tend to have features in common; this also means that emphases mentioned in the right column belong to several technologies on the left, and vice versa. The temporal dimension covered by the left column is fairly long, reaching up to the 21st century (e.g., ubiquitous computing). Previous classifications usually concentrate on contemporary applications only (cf. e.g., McGrath 1990, 50; see Tella 1994a, 66; Bates 1993, 4; see Tella 1994a, 68; Marra & Jonassen 1993, 63; see Tella 1994b, 152; LeBaron & Bragg 1993, 87; see Tella 1994b, 155).
As new applications keep on being launched, many educators, no doubt, whole-heartedly agree with Collis & Verwijs's (1995, 5) statement that "we seem now to have moved from a time of comparative simplicity to one of a bewildering range of developments and terminology".
At the moment it is easy to understand that a computer which is not logged on to a communications network is a stand-alone machine, still useful for many purposes, but with no real asset regarding a networked learning environment. From an educational point of view, it could best be described as specific to and within a given computer application. Broadly speaking, computer-based training in its conventional form can also be regarded as a general educational application.
TABLE 1. CLASSIFICATION OF MODERN INFORMATION AND COMMUNICATION TECHNOLOGIES.
Technologies:
- Computers as stand-alone machines
- Audio conferencing
- Audio-graphics
- Telefax
- Computer conferencing (on-line interaction, e.g. e-mail, chat)
- Interactive CD-ROM
- Multimedia, hypermedia
- Desktop video
- Video conferencing
- Virtual reality
- Virtual computing
- Ubiquitous computing

Educational applications or emphases:
- "Traditional" software
- Computer-based training (CBT)
- Traditional distance education
- Rapid information exchange
- Distance and multi-mode education and open learning
- Individualised learning
- Simulators
- Electronic performance support systems (EPSS)
- Intelligent tutoring systems (ITS)
- Distance and multi-mode "full channel" communication and interaction
- Experiential learning
Audio conferencing and audio-graphics rely heavily on the conventional telephone network, although cordless microphones and other equipment can also be used. Audio-graphics has made it possible to transmit graphs and pictures to remote locations, and to use an optic tablet for drawing purposes. Audio conferencing represents traditional distance education technology, still widely used.
Telefax is a special problem from an educational point of view. It certainly represents a user-friendly technology when compared to e-mail, for instance. Besides, practically all fax machines communicate with each other, which is not true of e-mail systems. Naisbitt (1994) sees the difference between telefax and e-mail as the difference between evolutionary and revolutionary developments. Telefax is "a bridge between the old and new. Paper is slid into the fax machine much like being slid into an envelope. And the fax machine is dialed the same way the phone is. It is familiar and comfortable. Electronic mail, on the other hand, is revolutionary and does not relate to paper, only to electronics" (Naisbitt 1994, 96). However, Naisbitt (1994) believes that as people play more and more to e-mail's strength, it will start to gain on faxes.
Computer conferencing has widened the educational perspective by giving more open access to distance and multi-mode (2) educational solutions. In fact, computer-mediated communication and distance education have a lot in common. In both of them, technology can take students into otherwise inaccessible environments (cf. e.g., Bruce 1989, 243). In distance education, the teacher's role is usually fairly active, while in computer conferencing teachers tend to become consultants or co-learners. One of the key issues to be studied further is whether some factual information or knowledge is being transferred through computer conferencing or distance education, or whether more attention ought to be paid to interactional effort and process-led collaboration.
Saying that audio conferencing is largely based on listening comprehension, while computer conferencing leans on written skills and reading comprehension, is a simplification, because the limits of the traditional language skills (reading, speaking, listening, writing) overlap extensively in the field of MICT. It has been pointed out elsewhere (e.g., Tella 1992) that e-mail, and computer-mediated chat in particular (i.e., communication in real time via computers), emulates the use of oral language while making use of a written mode (see also Chandler 1995 for an extensive analysis of computer-based writing). In fact, it is slightly surprising to notice that while oral proficiency is being underscored in many walks of life, at the same time some of the latest developments of MICT, such as cellular phones connected to data transmission cards, have started using written messagerie (short written messages typed on the tiny screens of the phones).
From the research point of view, e-mailing and computer conferencing give ample opportunity to gather data electronically. Some of the traditional data analysis techniques, however, cannot be used when computer-mediated communication is analysed. Nordenbo (1990) points out some of the difficulties connected to these kinds of data analyses. Some analysis techniques and classifications are given in Tella (1994b, 52--62).
Computer-mediated communications systems apparently offer different kinds of educational opportunities. Boyd (1987), for instance, underlines (i) epistemological viewpoints, connected to discursive flexibility facilitated by electronic mail systems, e.g., bi- or multidirectional (from one/many to one/many) communication vs. unidirectional communication (from one to many only), represented by mass media; (ii) affiliative viewpoints (e.g., peer tutoring and long-term affiliations between students and their school, or among students), and (iii) the physical flexibility offered by computer-mediated communications systems (e.g., opportunities to study in more convenient places and at more convenient times).
The main focus in CAI and CAL and in traditional computer-based software intended for educational purposes used to be on the proceduralisation of the formal elements of the target language (cf. e.g., Cornu et al. 1990, 364), while in computer-mediated communication, and in e-mail in particular, the computer serves not only as a learning tool but also as a tool for communication and information retrieval. As to the position of e-mail in a broader context, Naisbitt (1994) foresees, somewhat paradoxically, that there will be a shift of emphasis towards "virtual tribes" or "electronic tribes," as e-mail is a tribe-maker: electronics makes us more tribal at the same time as it globalises us. This vision refers to the fact that e-mail and communications networks are likely to make communication more networked, while laying more emphasis on telelogic communication, i.e., on small group or specifically targeted group communication.
At the moment, CD-ROMs are gaining ground in the educational sector. More and more schools are purchasing CD-ROM or CD-I players and most of the modern microcomputers already have a CD-ROM player installed when shipped to the users. Basically, a CD-ROM gives an individual learner access to an enormous amount of hypertext-based information. On the other hand, users of communications networks also access CD-ROMs without paying much attention to it. For instance, many library databases are in fact on CD-ROMs which the user accesses through communications networks.
Multimedia and hypermedia are words which mean different things to different people (cf. e.g., Nix & Spiro 1990; Dede 1992; Galbreath 1992; Gayeski 1992; Hill & Wright 1993; see also Leino 1994). Lane (1993) describes how computer, video, and audio can be combined into multimedia in an interactional model. Chou & Matsuoka (1993) talk about an integrated learning system (ILS), which is used to create a computer-based information-rich constructivist learning environment (CIRCLE), in which communications networks are made extensive use of and which involves real experts in the activities of the schools. The ambiguity of the word `multimedia' has also led to mild protestations, as in Braswell (1994), who through a parody points out several problems connected to multimedia at school (the dollar factor; the just enough to be dangerous syndrome; the Holy Grail syndrome; you don't need much time; the purpose behind repurposing).
On the other hand, it may be worthwhile trying to find common features between intrinsic characteristics of multi- or hypermedia and up-to-date or communicative foreign language teaching. Liu (1994, 305), for instance, has identified seven features common to both communicative CALL (computer-assisted language learning) and hypermedia (TABLE 2).
EPSS is cautiously proposed as a new category of educational software (Collis & Verwijs 1995). Barker & Banerji (1993; see Collis & Verwijs 1995, 6) define EPSS as "a custom-built interactive guidance and information support facility that is integrated into a normal working environment … with a range of different performance support tools, each one of which will have been selected in order to aid a particular job function". One such EPSS could, for instance, include all sorts of tools that would help a writer browse databases on the WWW, retrieve relevant information with key words, integrate all necessary items into a piece of writing, and then write up a complete article, to be finalised and printed out. While these kinds of tools are being built, Collis & Verwijs (1995, 13) consider it most important to advance from a systems approach towards a human approach, which would include questions like "What are the user's activities and needs?", "How can electronic support help?", "Balance between human and technological aspects?", etc. Finally, EPSSes should be integrated with the electronic tools and environments the end user is already using for his on-going tasks (Collis & Verwijs 1995, 20).
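The keyword-retrieval step such a writing-support tool might perform can be illustrated with a minimal sketch. This is not Barker & Banerji's design; the function name, the toy document set, and the simple word-count scoring scheme are assumptions made purely for illustration:

```python
# A toy keyword-based retrieval step, as an EPSS writing tool might use:
# score each document by how many of the writer's key words it contains,
# then return the relevant documents, most relevant first.
# All names and data here are hypothetical examples, not from the article.

def retrieve(documents: dict, keywords: list) -> list:
    """Rank document titles by the number of keywords their text contains."""
    def score(text: str) -> int:
        words = text.lower().split()
        return sum(1 for kw in keywords if kw.lower() in words)

    ranked = sorted(documents, key=lambda title: score(documents[title]),
                    reverse=True)
    # Keep only documents that match at least one keyword.
    return [title for title in ranked if score(documents[title]) > 0]

docs = {
    "Hypermedia in schools": "hypermedia learning environments in schools",
    "Telefax history": "the telefax as a bridge between old and new",
    "CALL overview": "communicative language learning with computers",
}
print(retrieve(docs, ["learning", "hypermedia"]))
```

A real EPSS would of course use far more sophisticated retrieval, but the sketch shows the basic shape of the "retrieve with key words" step before the writer integrates the results into a piece of writing.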
TABLE 2. PARALLEL FEATURES BETWEEN COMMUNICATIVE CALL AND HYPERMEDIA (LIU 1994, 305).
Communicative CALL | Hypermedia |
• Program does not impose | • Nonsequential and nonlinear |
• Program judges more to provide | • Contextual |
• Student is in control | • High level of learner control |
• Student relates to subject matter | • Learners can construct |
• Student creates own learning experience | • Better accommodate learners |
• Student perceives task as motivating | • Multimedia aspects is a source for |
• Student views task as a novel activity | • Provide new experience. |
Desktop video (a PC plus a video camera on top of it) will probably be substituted for ordinary video conferencing systems in situations where only a few persons are interacting with one another. One of the assets of desktop video software is that communication is possible over the Internet, while ordinary video conference systems use ISDN lines. Still, both add the visual channel to the interaction between sender and recipient.
Not much will be said in this connection about the last categories in TABLE 1, i.e., virtual reality, virtual computing, and ubiquitous computing. What seems to characterise all these technologies is that learning becomes more and more experiential. Ubiquitous computing is an ideal concept (Naisbitt 1994), implying that by the time this ideal has come true, computing will have been wholly integrated into all educational systems. Naisbitt (1994, 56) also argues that in the global economic network of the 21st century, information technology will drive change just as surely as manufacturing drove change in the industrial era.
Some of the above components will be taken up in the next chapter from a different angle.
The Importance of Communication Channels
The problem of which channels are focused upon when discussing the question of communication is essential. In this context, three channels are often taken up, i.e., monologic communication (i.e., traditional mass media), dialogic communication (i.e., face-to-face human communication), and telelogic (or computer-mediated human) communication (cf. e.g., Tella 1994a, 45--48).
The role of communication can also be approached from other perspectives. We can for example talk about primary, secondary or tertiary communication. Primary communication can be considered as face-to-face communication, mostly in real time, for instance when having a dialogue with somebody or when delivering or listening to a speech.
Secondary communication can be represented by a number of art forms, such as theatre, while tertiary communication consists of mediated communication or, in more general terms, of computer- mediated human communication (CMHC).
Dimensionality of Communication
Another way of looking at communication is the dimensionality or directionality of communication, in which several components can be distinguished.
Basically, if communication is intended to go in one direction only, we can refer to it as one-way or monodirectional (unidirectional) communication. Two-way (bidirectional) communication emphasises the fact that both the sender and the recipient of the message can play an active role in the communication act.
Electronic mail is often considered as an example of bidirectional communication as the roles of both communicators (transceivers) are essential. A third dimensionality contains pluridirectional communication, such as computer conferencing or networking in general, in which communication is spread on several levels of communicators, either simultaneously or asynchronously, i.e., independently of time and space.
There is a clear demand for clarification as far as the classification of the various combinations is concerned. In this article only a few anticipatory examples will be given. An example of primary unidirectional communication would be giving an order, while an example of primary bidirectional communication could be a dialogue. In foreign language teaching, primary unidirectional communication often characterises teacher-centred lessons, during which the teacher tends to limit his use of the foreign language to so-called "teacher-talk" commands. The primary target should, however, be primary bidirectional communication, in which the teacher aims at more communicative use of the target language, relying on learners' language proficiency as much as possible.
Secondary unidirectional communication is mainly represented by traditional mass media communication, i.e., monologic communication media as they were up till the mid- and late 1980s. Secondary bidirectional communication includes telecommunications, CD-ROMs, CD-Is, etc. Network-based communication can be depicted as secondary pluridirectional communication, which, at the moment, is growing fast.
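The level-by-direction combinations just described can be sketched as a simple cross-classification. The following Python fragment is only an illustrative model; the class names, the function, and the example mapping are this sketch's own devices, built from the examples given in the text, not part of the original taxonomy:

```python
from enum import Enum

class Level(Enum):
    PRIMARY = "face-to-face communication"
    SECONDARY = "mediated communication (art forms, mass media)"
    TERTIARY = "computer-mediated human communication (CMHC)"

class Direction(Enum):
    UNIDIRECTIONAL = "one-way"
    BIDIRECTIONAL = "two-way"
    PLURIDIRECTIONAL = "many-to-many"

# Examples drawn from the text: each act of communication is placed
# in the grid by its level and its directionality.
examples = {
    "giving an order":        (Level.PRIMARY,   Direction.UNIDIRECTIONAL),
    "a dialogue":             (Level.PRIMARY,   Direction.BIDIRECTIONAL),
    "traditional mass media": (Level.SECONDARY, Direction.UNIDIRECTIONAL),
    "CD-ROMs, CD-Is":         (Level.SECONDARY, Direction.BIDIRECTIONAL),
    "network-based communication": (Level.SECONDARY,
                                    Direction.PLURIDIRECTIONAL),
}

def classify(act: str) -> str:
    """Name the combination a given communication act belongs to."""
    level, direction = examples[act]
    return f"{level.name.lower()} {direction.name.lower()} communication"

print(classify("a dialogue"))
```

The point of the grid is simply that level and directionality vary independently, so every act of communication occupies one cell of the cross-classification.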
Another way of dividing communication is through the communication channel vs. its representation. This division includes oral production (via the phone, for instance), textual production (e.g., letters, faxes), textually produced but emulating oral production (e.g., e-mail), and iconic production (e.g., graphics, audio-graphics).
Two other dimensions are worth mentioning as well: the continuum from stationary to nomadic and from isolating to networking.
Educational technology, in the traditional sense, was mainly synchronous and stationary. A language laboratory represents traditional educational technology at its best--or at its worst. However, technological progress has advanced towards more nomadic products and media, which are independent of time and place, asynchronous, transferable, light, inexpensive, and which gradually become invisible.
Traditional technology also implied isolation, to some extent at least. Language labs as well as traditional computer laboratories used to isolate learners from one another while imposing a rigid seating order on all of them. Portable computers and laptops started to change the situation, which, thanks to cellular phones and data cards, is now utterly different from what it used to be.
Virtual reality is not yet here to a tangible extent but it is not difficult to grasp what thought-provoking opportunities it will give to, say, foreign language learning as soon as virtual excursions to various target countries become feasible. One step onwards will most probably be virtual computing which, according to some researchers (e.g., Naisbitt 1994) will lead to ubiquitous computing in the 21st century.
The Question of Cost/Effectiveness
"Education has by far the lowest level of investment in technology of any major sector of the economy. (Education invests an average of $ 100 per worker per year in technology, compared to an average of $ 50,000 per worker per year in many industries.)" (Kelly 1988; see Salisbury 1992, 9)
Educational technology has also been handicapped by the economic restrictions imposed on schools. The question, basically, is how costly or how affordable certain communications media are. For the time being, communication via satellites is costly, while video conferencing, especially desktop video such as CU-SeeMe type software, is becoming affordable at various levels of schooling. What this basically boils down to is the videophone, which gives full connectivity to the Internet but costs roughly 10% of a full-scale video conferencing system. The Kilpisjärvi Project (Husu et al. 1994), however, clearly demonstrates that a full video conference system using ISDN lines can also be fairly affordable.
Audio conferencing and telephony in general have been studied in quite a few countries for years now. Audio conferencing, when combined with audio-graphics, for instance, has proved a success even if in the conventional solution two telephone lines are needed.
At the most affordable end of the scale, educators have had recourse to videotapes, audiotapes, cassettes, minidiscs, etc. These represent conventional if not widely and universally available technology which few educators fight against.
Power and Responsibility
Power, authority, and responsibility--both assuming responsibility and allowing others to assume responsibility--are delicate issues in communicative media education. Power as exercised by the teacher is sometimes divided into four different categories: authoritative, autonomous, authoritarian, and abdicated power (Underhill 1989; see also Ahtee & Tella 1995). As far as MICT are concerned, power is constantly a topical issue which should be studied carefully in various research settings.
Another way of thinking of power is the continuum from monologue to dialogue and polylogue. In order to facilitate dialogue, the teacher has to be able and willing to give up his own monologue. Dialogue will change into an authentic polylogue only if constructivist principles are respected and mutual discussion encouraged. Polylogue is basically a fundamental way of conversation and an exchange of ideas in computer conferencing, but it can be hampered if the technical or pedagogical manager of the conference occupies too much of the conference's virtual space. Consequently, the question can be traced back to the role of monopoly (the teacher's or manager's authority) in comparison to authentic dipoly (two communicators' authority) or tripoly, in which several communicators respect each other's right to express themselves. In many respects, this is closely connected to the question of electronic leadership (cf. e.g., Tella 1994a, 61), which appears to be one of the most important factors as far as the success of an electronic conference is concerned.
At Once Individualised and Networked vs. Mass Communication
Most of the above factors are also linked to the question of whether communication and instruction are being individualised or oriented towards mass communication. Modern information and communication technologies give access to individualised learning tools and make it easy to organise an individualised or learner-focused learning environment which, however, takes advantage of extensive physical and conceptual networking. In the same way, dyadic forms of work are facilitated as well as small group constellations. As stated above, telelogic communication emphasises small specialised group communication or target group communication, while traditional mass media (radio, television or the so-called electronic media in comparison to telematic media, such as e-mail, computer conferencing, WWW, gophers, Archie, FTP, etc.) aim at reaching large audiences.
The benefits of networked computers are accepted as a new starting point. Sawyer (1992) cites America 2000 (1992) in that "[b]y far the most promising benefit of networked computers for education lies in extending the network to include access to computing from the homes of teachers and students. This would address head-on one of the most difficult roadblocks to educational improvement … Non-classroom access to a significant learning tool would be especially important in the case of disadvantaged students … access to a personal computer in the home … could provide a powerful alternative to entertainment television or the street. If students had home access to personal computers integrated into the curriculum and linked to the schools' library resources, they would be able to extend their classroom exposure by repeating lessons missed in class, retrying simulation-based `lab experiments,' and preparing homework using the computer for library access, word processing, drawing, music study, and computation. … The home time available to students during each week of the school year could easily equal the time spent in the classroom. During the summer vacation, the same computer facility would be available for educational gaming, electronic mail, summer projects, and adult literacy or other training programs." (America 2000, 1992)
Communication/Education/Teaching
In the final analysis, the question is about the subtle differences between media communication education, communicative media education and educational media communication on the one hand, and between communication as such and communication as educational communication on the other; and, ultimately, about the differences between teaching, studying and learning, as well as about the shifts of emphasis in learning paradigms and in concepts of learning and knowledge, in connection with the changes in the status of teachers, students and schools (TABLE 3).
TABLE 3. A SUMMARY OF CHANGES AND SHIFTS OF FOCUS IN LEARNING PARADIGMS, THE CONCEPT OF LEARNING AND KNOWLEDGE, PEDAGOGICAL APPLICATIONS OF MICT, COMMUNICATION, AND IN THE STATUS OF TEACHERS, STUDENTS, AND THE SCHOOL (BASED ON TELLA 1994a, 71–72).
Learning Paradigms: Instructional → Revelatory → Emancipatory, co-operative, experiential

Concept of Learning and Knowledge: Behaviourism, objectivism → Contextualism → Constructivism, constructionism

Pedagogical Applications of MICT: ADP, computer-assisted instruction/learning (CAI, CAL), computer-managed instruction (CMI) → Computer as a tool, simple learning environments → Open, multimedia-based, network-focused and knowledge-intensive learning environments, virtual school, virtual reality, virtual/ubiquitous computing

Communication: Intrapersonal communication, low communication proximity → Monologic mass communication, dialogic (interpersonal) communication → Telelogic communication, small group and target group communication, high communication proximity

Teacher's Status: Information distributor, controller, judge, "sage on the stage" → Consultant, guide, coach, co-learner, facilitator, "guide on the side"

Student's Status: Passive, reactive, assuming little responsibility → Active processor of information, proactive constructor of one's own knowledge capital, responsibility-conscious

School's Status: Physical school building, separate classrooms, subject-centred Lehrplan → Symbiosis of physical school and virtual school, getting networked on a global scale → Virtual school, close contacts with surrounding society, multimedia-based workspaces
Communication implicitly covers everything, as everything is communication per se: verbal, non-verbal, or at least paralinguistic or extralinguistic. Teaching may include studying and learning, but there has been a strong shift of emphasis from teaching-based approaches towards learner-centred or learner-sponsored approaches. When speaking about open multimedia-based and networked learning environments or about virtual schools (e.g., Tella 1995), we have to bear in mind that the word `learning' in the English language also implies studying (3), while in Finnish it mostly refers to the product of studying. Perhaps we should start to underline the importance of education as an overall term; speaking of primary, secondary, adult and lifelong education may also lead us to start talking about computer-based education with a special emphasis on computer-mediated communication, which seems to cover most of the features human-to-human communication has always included. Consequently, open learning and distance education are terms whose role and significance are likely to increase in the years to come.
"The context for human development is always a culture, never an isolated technology."
2. This report presents a framework for thinking about the artist as an actor in the innovation process in information and communication technologies. The framework differs from most approaches to the interactions between the creative arts and techno-science in two ways. First, it attempts to identify and characterize the range of innovative outcomes and the factors that shape them along multiple dimensions -- aesthetic, technological, scientific, economic -- and time frames, both long and short. Second, the framework stresses the importance of a new class of hybrid innovative institution, the studio laboratory, where new media technologies are designed and developed in co-evolution with their creative application.
The research is informed by an overview of contemporary studio-laboratories, a historical case study tracing the build-up of a strong digital media capability in Canada, and a review of literatures bearing on the sociology and economics of innovation. Numerous individuals (artists, researchers, theoreticians and policy-makers) have been consulted. The framework presented widens the way contemporary artistic practices are understood by placing them in the context of innovation studies; and in turn, it broadens the way in which the literature on innovation has until now addressed the contribution of the creative artist to the digital media design and diffusion process.
The report is organized in a series of short thematic chapters, each treating in a different way the common thesis unifying them: that in the emerging digitally networked society, the creative arts and cultural institutions in general are mutating by forming a constellation of productive relationships with the science and technology research system, industry, humanistic and social science scholarship, and with emerging new structures of civil society. This apparently rising density of communication suggests the need to begin rethinking some aspects of the relationship between cultural support policy, innovation and research policy, and the still nascent but interconnected set of concerns about the requirements for widespread creative participation in a "techno-sphere" increasingly shaped by fast-changing digital media technologies. The concluding section identifies a set of possible interventions and topics for further study, though this phase of research does not permit the preparation of detailed designs or proposals for specific measures.
Cultural theorists will no doubt recognize the shifts briefly alluded to as continuous with a progressive reduction throughout the 20th century of the so-called autonomy of the artist as an alienated or estranged figure existing on the margins of society. Particularly among groups who have defined their "art" more or less in terms of technological innovation, this turn away from the Enlightenment notion of the aesthetic as the "disinterested play of the senses" can sometimes provide the material basis for establishing sustainable linkages with highly charged sectors of the global economy -- the entertainment and information industries -- and their associated scientific and technological bases. But it would be a mistake to consider the breadth of these shifts only as a widening of the well-established role of creators in industrial design to include such relatively new, trendy factors as "interaction design" or "relationship technologies". As art historians have pointed out, the movement of the machine into the studio is a progressive one which can be variously traced to the early 20th century avant-gardes, but in particular to a marked tendency since the 1960s to engage critically with the "technological sublime" as both material and subject-matter.[1] This critical orientation, at least among some of the emerging "media-art and technology" community, is part of what makes the phenomena difficult to describe from a singular disciplinary perspective. Works conceived to make a conceptual or critical point by re-appropriating simple or older techniques can be misread when only evaluated in terms of technological novelty; just as, conversely, the point of "speculative" technological invention may at times be missed by developers seeking only incremental innovation understandable in terms of existing markets and users.
Similarly, the sites of innovation with which we will be concerned in this report, "studio-laboratories", need to be understood as emergent formations fed by, and flowing into artistic, techno-scientific, economic and discursive sources. This anti-reductionist approach is unavoidable, given the complexity of interests in and about digital media today. While we aim to characterize a wide range of linkages between art, science, technology and society through digital media, the emphasis will be on identifying those "pathways to innovation" with the greatest potential benefit to the widest number of actors. Somewhat differently conceived, pathways are perhaps better understood as configurations, since multi-finality is taken for granted in the phenomena being discussed. As such, the approach will contrast sharply with other current stances towards the "unity of knowledge" question that continues to be widely debated on both sides of the postmodern divide. For instance, the socio-biological project of E.O. Wilson proposes to bring the arts and their interpretation safely within the purview of contemporary neuro-science, explicitly aiming to demystify the "truth and beauty" of the arts in terms of epigenetic regularities yet undiscovered. Notably, Wilson's consilience, a term for transdisciplinary coherence, dismisses the messy hybridity of today's "unpleasantly self-conscious form[s] of scientific art or artistic science" [2, 211]. Self-conscious or not, it is precisely towards these intermediary zones -- open to the logic of "both-and" rather than the categorical closures of "either-or"[3] -- that we must turn to make sense of the otherwise baffling multiplicity of today's creative practices and institutional forms.
In 1974, pioneering electronic artist Nam June Paik assumed the role of technological forecaster and submitted a report to the Rockefeller Foundation urging the construction of a global "broadband telecommunications infrastructure"[4]. While critical of mandarin intellectual disdain for mass media, surprisingly Paik did not even bother to advocate spending on the avant-garde arts, or on the promotion of the work of his fellow video-artists. Rather, he envisioned a two-way, high-capacity video and data network — the "electronic superhighway" — that would augur a profound cultural shift. In the framework of this now familiar wired world, artists and intellectuals would have the opportunity to make a broader social contribution, what he called "output capacity", beyond the convention-bound production of luxury cultural goods for limited circulation.
This broader role was to "humanize technology", according to Paik, a more complex social implication that follows his consideration of the artist or intellectual in the context of then-current notions of the ‘post-industrial society’. Paik drew on Daniel Bell for his understanding of art as information, and John Kenneth Galbraith to underwrite an increasingly central role for the arts as a factor in economic growth. He conceived an amalgam of media, information, knowledge and communication, serving as "a lubricant and impresario to facilitate the relationships and cybernetic interaction of the society of the future".
Now, twenty-five years later, much of the infrastructure aspect of Paik’s vision seems to be in place, owing in large measure to the incredibly rapid uptake of the internet for multimedia as well as transactional communication. The kinds of immediate benefits Paik foresaw an electronic superhighway providing, easily distributed educational programming and greater connectivity for work and pleasure, are becoming commonplace for the growing ranks of the "virtual class". The falling costs of hardware, coupled with relatively cheap or free software, make the barriers to entry for creators lower than they were in Paik’s day, when he was one of the earliest to adopt portable video equipment and to devise his own techniques for electronically processing images. And today digital media are widely understood to be facilitating, as Paik predicted, new and varied kinds of relationships, and not only between buyers and sellers, teachers and learners, creators and audiences. Further, they have attracted the participation of a significant number of the very cultural élites whose disdain for the public television of the 1970s Paik took pains to criticize in his report.
Yet from the vantage of the late millennium, it is no longer possible to share Nam June Paik's optimism about the wonders of global connectivity, nor, from an analytical standpoint, his deterministic belief in the sufficiency of technological infrastructure for stimulating a widespread culture of active producers of new creative expression. The internet repeats aspects of the early history of radio broadcasting [5] with the growing consolidation of corporate interests at the high end of broadband and advanced applications; cultural applications of interactivity have bunched up around a relatively narrow group of heavily promoted large-market entertainment products (even if, in some cases, they are played online in technologically innovative multi-player configurations); and thirty-year-old visions of new kinds of computer-enabled literacy, extending sensory acuity and augmenting intellectual capacity, seem to be more stalled than spurred by the current market frenzy around media technology. Most crucially, in the 1970s, Paik was not yet in a position to address the key issue of how to bridge the new skill-sets associated with digital technologies with existing, often age-old capabilities grounded in embodied, locally specific practices.
Software indeed has a dual nature, as both medium and tool; practices cannot transcend the constraints built into software tools, unless these are reflexively designed to permit extensible, evolving development in the process of use. This is not just the familiar problem of market power exerted by the dominant position of a few large software companies, whose application packages define a de facto standard that, for better or worse, tends to be accepted as the benchmark of digital literacy. In the arts community, too, disquiet rises among the more reflective, like Simon Penny, Carnegie Mellon professor of both art and robotics [6]:
"…every day we come to new reconciliations between our artistic goals and methods and the requirements and restrictions of the machines we work with. With a little critical distance, we can see that we are reshaping artistic practice to suit a new set of tools."
Yet these concerns, which have circulated uneasily among the electronic art, music, and graphics communities since the 1980s, are rarely considered in relation to those of the apparently opposite end of the technological spectrum (and world) -- the digitally disenfranchised, to whom, typically, technological capability is presented as nothing but the adoption of a set of pre-set, externally-defined "solutions". But the same questioning can illuminate both ends of the spectrum: how can local, contextually-relevant capacities be developed, which at once build on but also provide the potential to transcend the existing media ecology? Manuel Castells, addressing the culture of the network society, insists on the need to look for and understand the "specificity of new cultural expressions, their ideological and technological freedom to scan the planet and the whole of humankind, and to integrate, and mix, in the supertext any sign from anywhere" [7]. This cultural specificity, or capacity to adapt material means to self-defined expressive uses, is by no means a given result of technological deployment, on the one hand, nor of the transmission of pre-existing messages through digital channels, on the other. If the image of digital expression as a "dynamic, moldable medium" dates back to the early years of the computer era [8], its reality is not a lot more widespread now than it was then.
This report on Pathways to Innovation in Digital Culture will concentrate, as Nam June Paik put it in 1974, on those configurations with the greatest potential for "humanizing technology". But it will also take careful heed of the various skeptical voices who over the ensuing decades have developed a paradoxically "post-humanist" stance towards the liberating potential of human-machine communication and expression. After Donna Haraway's celebrated feminist "manifesto for cyborgs", or more recently Katherine Hayles's tale of how since cybernetics "we became post-human" [9], there is no need anymore to rehearse familiar myths of empowerment in terms of the "liberal unified humanist subject". The vision of human expression seamlessly articulated with intelligent machines, pleasing to few adherents of art's proudly transcendent claims to Truth and Beauty, nonetheless provides a basis for building fruitful understandings between the diverse social actors with interests in the shaping of digital media -- researchers, technology developers, artists, and theorists. Increasingly, it appears that these meetings are taking place within innovative institutional structures -- spanning organizations, research networks, and projects. And it is to these sites -- the "studio-laboratory" for combined art production and technological research -- that we now turn.
2. Transdisciplinary Knowledge Production and the Arts
The concentration of scientific research in structurally distinct industrial or institutional laboratories dates only from the later 19th century. Current scholars describing what are now termed "systems of innovation" have pointed out common trends, as well as national differences, in the transition from pre-industrial to the more familiar industrial and now post-industrial organization of research and development. During the first of these phases, it is sometimes overlooked how strong was the artisanal component -- mechanical skills, like spatial imagination, dexterity, and fluency with materials -- in enabling early industrial innovation. With the spread of advanced professional university training, as well as the formation of scientific and engineering societies, the specialized research and development laboratory became increasingly common in the early 20th century, bringing disciplined scientific knowledge to bear on industrial problems. With important national differences, the role of the state was always crucial, particularly in steering priorities towards the military, health, and particular industrial sectors [10].
After World War II, and the decisive impact of the mission-oriented Manhattan project in the U.S., the distinction between "pure" scientific knowledge and its "applied" technological development began to erode. Not just the close interaction of multiple branches of science was at work here, but also the importance of new developments in technology, and especially instrumentation, in setting the very research agendas for science. A compelling, if somewhat stylized, interpretation of these complex shifts distinguishes between two concurrent "modes of knowledge production".[11] Gibbons, a former director of the Sussex University Science Policy Research Unit (SPRU), along with an international team of social scientists, calls traditional discipline-bound R and D "Mode 1 knowledge production". He summarizes the emergent second mode in terms of a set of key trends:
- Transdisciplinary. Further than inter-disciplinary work, in which different fields address separate problems inside a common framework, transdisciplinary research involves a stronger "interpenetration of disciplinary epistemologies". Effectively, this means new fused horizons become possible, beyond or transcending paradigms existing within single disciplines. Consciously pursued, transdisciplinarity is an approach to problem-solving suited to settings where disciplinary modes prove inadequate.
- Multi-site. More numerous organizations become involved as partners or collaborators in research, making the process more socially distributed as well as heterogeneous. Scientific discovery becomes more collective, as evidenced by publication authorship, and it becomes more organizationally diverse: hospitals, institutes, user-groups, consortia, networks, etc.
- Applied. Gibbons et al. classify much transdisciplinary research as "essentially a temporary configuration and thus highly mutable. It takes its particular shape and generates the content of the theoretical and methodological core in response to problem-formulations that occur in highly specific and local contexts of application".
- Reflexive. Social accountability becomes more important in determining research agendas; furthermore, greater inter-communication between fields tends to foster a higher degree of self-awareness in defining and explaining disciplinary frameworks.
In the arts and humanities, transdisciplinarity has had a different career since 1850. Nineteenth-century sensibility was decisively rocked by the Wagnerian notion of the total work of art -- the Gesamtkunstwerk -- which, in an abstract sense, can be understood as initiating a movement towards more expansive and deliberate synchronization of the separate disciplines of the arts into new synthetic combinations. The legacy of this creative and conceptual innovation was a radical way of thinking about artforms or media in terms of the inter-relatedness of their codes or constituent parts. By the second decade of the 20th century, and alongside the rapid growth of mass industrialization, the conceptual scope of some artists and cultural theorists extended still further, to embrace "art and technology [as] a new unity". This 1922 slogan of Walter Gropius, from the Weimar Bauhaus, underlined a strongly applied socio-technical project to shape the quality of mass reproduced designs with all the imaginative resources of the contemporary creative spectrum -- not excluding abstract art, modernist music, architecture, and theatre. This conception of an art-technology unity persisted through the 20th century; its technological realization, with the diffusion after 1945 of electronic and telematic media, provides an often neglected connecting thread between today’s virtual worlds of interactivity and those of the early 20th century avant-gardes.
These basic shifts in culture, touched upon all too briefly here, are rarely seen as pertinent, even conceptually, to the changes in knowledge production previously summarized. Gibbons' treatment of the arts and humanities identified some aspects of Mode 2 processes, like the increased role of instrumentation in the humanities (e.g. the use of the computer to produce theoretical models) and what is called the "re-shaping of aesthetic response"[11]. But overall, he remains ambivalent about the way in which artists and humanists fit into the new mode of knowledge production. They are described as:
"…standing aside as quizzical commentators who offer doom-laden prophecies or playful critiques, and as performers who provide pastiche entertainment or heritage culture as a diversion from threatening complexity and volatility. In other senses, they are even more deeply implicated: through the culture industry, they fashion powerful, even hegemonic images, and through higher education they play a direct part in the new social stratification." (110)
This report will demonstrate a set of closer affinities, by looking at the growth of what we have designated the "studio-laboratory", as a site within or through which artists, scientists, technologists and theorists commingle. In a study commissioned by the French Ministry of Culture, Norman [13] has previously profiled a dozen current European cultural laboratory and media centres where "transdisciplinarité" contributes to the "creation of new aesthetic forms" grounded in development of new technologies. Besides transdisciplinarity, this study confirms a marked tendency towards multi-site co-operation and, among several cases, a strong vocation to serve as a bridge between social needs (often expressed as "the culture of the network society") and the technology development process.
A 1996 conference, Art@Science, sponsored by the Japanese research consortium ATR, produced a collection of papers which, among other things, reinforces what Gibbons might call the interpenetration of applied ("artistic") and theoretical ("scientific") components in the Mode 2 research context.[14] The conceptual framework for this contribution, at least at the editorial level, tends, however, to stress a putative "convergence" between art and science, rather than the more contingent, evolutionary models implied in Gibbons' notion of Mode 2 knowledge production.
The rest of this chapter considers the studio-laboratory phenomenon in relation to the wider dynamics of contemporary research. The first part interprets the growth of studio-laboratory settings since the 1960s; the second considers their historical emergence in relation to a common classification of types of innovation; and the third introduces and briefly describes a diverse illustrative range of studio-laboratories and related structures.
In recent years, scholars have begun to unpack some of the persistent habits of thought which have tended to construe art and science as dichotomous. Caroline Jones and Peter Galison, respectively historians of art and of science, summarize the aim of a recent collection as moving beyond the "focus on ‘art’ and ‘science’ as discrete products," to look at "commonalities in the practices that produce them." [15] Still, little attention has yet been given to the institutional development of the contemporary studio-laboratory. Three overlapping phases may be distinguished.
In the first phase, dating from the 1960s and 1970s, artist centres, networks, university-based institutes and public sector labs were established to support open-ended exploration of new and emerging technologies by artists. Among the most celebrated examples was Experiments in Art and Technology (E.A.T.) founded by artist Robert Rauschenberg and Bell Labs physicist Billy Klüver in New York in 1966. The goal of E.A.T. was to establish "an international network of experimental services and activities designed to catalyze the physical, economic and social conditions necessary for cooperation between artists, engineers and scientists." The research role of the contemporary artist was understood by E.A.T. as providing "a unique source of experimentation and exploration for developing human environments of the future."[16] At the same time, other Bell Labs scientists were also engaged in collaborative research, in computer graphics and vision, music and acoustics.[17, 18]
Also during the late 1960s, at MIT, the Hungarian artist and Bauhaus affiliate Gyorgy Kepes founded the Centre for Advanced Visual Studies, providing a stable location for collaboration between artists-in-residence and university-based scientists and engineers. In the 1970s, composer Pierre Boulez launched the IRCAM (Institut de Recherche et Coordination en Acoustique et Musique) in Paris, based on a dialectical conception of research/invention as the central activity of contemporary musical creation; not incidentally, Boulez invoked the ‘model of the Bauhaus’ as interdisciplinary inspiration for what he considered the inevitable collaboration of musicians and scientists.[19]
The relative autonomy of these new centres — in the case of IRCAM, established with a fiercely guarded aesthetic independence setting it apart as a modernist citadel — distinguishes them from the more publicly oriented type of media centre that began to appear in the 1980s and 1990s. Typically incorporating festivals, exhibitions, commissions and competitions of electronic art, this second phase saw the increased commitment of both public administrations and private corporations towards exposing the most radical media-based creativity to a wider public. As festivals such as Ars Electronica or SIGGRAPH’s non-commercial art exhibition became global in scope during the 1980s, plans were drawn up in most advanced industrial countries to establish permanent centres able to incorporate a dual research/development and public education mandate. To mention only a few of the most conspicuous of these institutions, the Zentrum für Kunst und Medien (ZKM) and the NTT InterCommunication Centre were active in commissioning and publishing throughout the 1990s even before their physical centres were opened in 1997. The German philosopher and critic Florian Roetzer analyzed the ‘media centre’ bandwagon of the late 1980s, commenting sardonically that "everywhere there are plans to inaugurate media centres, in order not to lose the technological ‘connection’…This new attention is supported by the diffuse intention to get on with ‘it’ now, the contents remaining rather arbitrary, so long as art, technology and science are somehow joined in some more or less apparent affiliation with business and commerce." [20] Roetzer was not alone among critical intellectuals in harboring a deep ambivalence about these institutional developments, fearing that they would serve only to accelerate the public acceptance of automation in everyday life, on the one hand, and to co-opt artists — "with their purported creativity" — into becoming commercial application designers, on the other.
As it has turned out, explicitly designed linkages between art, research and innovation have developed a good deal beyond Roetzer’s cynical prognostications, and now form the basis for the third phase of the contemporary studio-laboratory. Many observers would probably count the MIT Media Laboratory as the main propagandist, if not initiator, of this phase, in spite of the secondary importance of artistic practice or input in its research activities. Xerox PARC since the early 1990s has prominently supported an in-house artist-in-residence program (whose modest scale perhaps belies the extensive attention it has received). In the words of its manager John Seely Brown, the program serves as "one of the ways that PARC seeks to maintain itself as an innovator, to keep its ground fertile and to stay relevant to the needs of Xerox"[21]. Other Silicon Valley, Japanese and some European private firms have followed suit, in differing flavors, though more or less in agreement with PARC's position that the traditional model of "corporate support for the arts" -- hands-off, patrician, and marketing-driven -- overlooks basic potentials for core innovation. Among cultural organizations, the Banff Centre for the Arts in Canada was early in initiating a major-scale investigation of "virtual environments" as a partnership with university researchers and industry sponsors.[22] Since 1995, research networks have begun to appear with the express aim of linking multimedia art with technological development and the social sciences. In short, the deliberate involvement of artists as collaborative researchers in innovation programs now takes place in a wide variety of social and economic settings, with a corresponding diversity of approach and program design.
Figure 1 below illustrates the increasing pace of establishment of studio-laboratory sites in the 20th century, showing a clear grouping of activity in or around the 1960s and again in the 1990s. This pace has now reached a point where it is no longer feasible to keep accurate track, particularly with the proliferation of all manner of "new media centres" of varying degrees of sophistication and scope on university and college campuses, within corporations, as regional industrial development efforts, and as catalysts for public access and digital literacy efforts. Rather than even attempting a comprehensive listing of such sites, we will focus below on characterizing the range and styles of their approaches to innovation.
Before turning to this, however, it will be useful to consider briefly the widening scope of the Research and Development process in the context of recent critiques of the so-called ‘linear’ model of innovation. This critique, undertaken since the 1960s by sociologists, historians, and economists of science and technology, makes explicit what Gibbons’ Mode 2 concept of knowledge production accepts implicitly: the inadequacy of the simple model of a one-way flow of ideas from basic science through applied research to development and commercial innovation. In place of the traditional mechanistic model, evolutionary, interactive models emphasize the linking of inventions to markets, with significant stress on user innovation and the role of embodied skill — tacit knowledge — as determinants of innovation.
Economist Christopher Freeman distinguishes between four categories of innovation and their diffusion: incremental innovations, radical innovations, new technological systems, and changes in techno-economic paradigm.[23]
1. Incremental innovation involves small-step improvement of existing technologies or processes; as such it covers the vast majority of patents taken out in the world, as well as typical changes in product design or styling within industry. It is worth adding, in this particular context, that it also includes the bulk of contributions to scientific research. Indeed Thomas Kuhn, the philosopher of science whose book on the structure of scientific revolutions brought the concept of "paradigm change" into common use, defined "normal science" as puzzle solving. Whereas within the arts "innovation is a primary value," in science it "arises only as a response to crises in established paradigms."[24]
2. Radical innovations are discontinuous events, going beyond variational creativity. In the oft-told illustration, no combination of horse-drawn coaches could have produced the railway; similarly, for many artists interested in working with information technologies, the aim is often to explore or invent new media forms as the 'unit' of innovative work, rather than to work within established techno-cultural genres. It is worth noting how artists' ideas about radical innovation since the 1960s have been shaped in part by the way in which Marshall McLuhan's widely diffused discourses about "media as art forms" characterized experimental artists as prophetic. Although McLuhan was himself thinking mainly about the modernist writers and painters whose radical innovations (Eco's "open work") actually anticipated aesthetic structures now embodied in electronic media, the notion of new media artworks as 'perceptual training' for yet-to-be-invented media environments has now taken hold widely. This makes it possible, today, to consider the proliferation of user interface creations in aesthetic terms — much as McLuhan spoke of the content of new media in terms of the features of previous ones.
3. New technological systems involve constellations of interrelated innovations, both radical and incremental; as systems, they entail economic and social as well as technological changes. Examples include plastics and synthetic materials in the 1930s and 40s, consumer electronics in the 1960s, and digital networks in our time. Taking the latter case as illustration, changes are underway in how knowledge is technically produced and distributed, in models of education and life-long learning, in the globalization of finance, and in the rise of electronic commerce. These interrelated technological and organizational changes combine to produce 'trajectories', along which innovations that once would have been radical become incremental as the system matures. The idea of technological trajectory is closely associated with that of 'path-dependency', the familiar effect of 'lock-in' which takes place when new technologies and associated human skills are widely diffused[25]. Another standpoint on the reversibility of technological trajectories, perhaps better suited to the complex patterns of interaction between art and technology, is provided by the French sociologists of innovation associated with the so-called "actor-network theory". These scholars speak of "socio-technical dispositifs" - a set-up, or dynamic apparatus - which combine objects both human and non-human, the conditions under which they are used, and the means through which new entities or agencies emerge in networks.[26] From this anti-reductionist angle, constraints reside in both things and people, and are both limiting and generative. Technological systems grow out of the co-evolution of actors and techniques during the conception and adoption of innovations.[27] Crucially, for the digital dispositifs under consideration here, it would appear that artistic conventions, craft routines, and related embodied practices can play an important role in the growth of new networks (or trajectories).
4. Changes in techno-economic paradigm refer to the so-called 'long waves' of economic and social change which, according to some evolutionary economists, have articulated the history of the industrialized world in 50-60 year periods since the mid-18th century. Techno-economic paradigms are pervasive shifts, based on the arrival of new material inputs that are cheap, widely available, and revolutionary in impact. The current 'Information Technology' paradigm, by this account, had been in preparation since the 1940s and 50s, but began in earnest only in the 1980s with the widespread and cheap availability of micro-electronics. (The previous mass-production paradigm began in the 1930s and 40s, organized around the cheap availability of energy supplies, including oil.) See Figure 2 for a representation of the five waves of innovation since the 18th century, ending with the current wave characterized by "digital networks, software, new media".
As interpreted by social scientists such as Manuel Castells, the information technology paradigm provides the basis for a vast synthesis of current political, social, economic and cultural tendencies[28]; so far, however, little attention has been given to what sectors may now be forming in preparation for the next techno-economic paradigm. From the vantage of the late 1990s, it seems apparent that some combination of bio-technology and cheap bandwidth will likely form the basis of the next techno-economic paradigm in the coming decades, distinct from but building on information technology. What the philosopher Vilém Flusser identified as an emerging "ars vivendi" as early as the late 1980s clearly signaled what is becoming a central issue for creators in the arts and techno-science, as we begin to imagine what it means to move beyond mere biological analogies to the practical construction of post-organic life.
Sampling of Studio-Laboratory Institutions and Structures
By juxtaposing the starting dates of studio-labs against the five innovation waves, it can be shown (Figure 3) that they cluster around the rising portions of the waves. No rules or strong theories are meant to be implied by this observation. It is nonetheless suggestive to think of the Bauhaus as catalytic in relation to the broader flow of innovation within the Fordist mass-production regime. Many of the studio-labs that appeared between 1950 and 1965 dealt broadly with a range of material technologies: light, electronics, and kinetic or cybernetic systems. From the standpoint of the aesthetic paradigms which they explored and defined, however, they can be understood as preparing the terrain for the new material possibilities afforded by very powerful networked micro-processors, which became a reality only toward the mid-1990s. As will be seen in the following survey, the current studio-laboratories are active in all four of the categories of innovation introduced above. Some, a distinct minority but noteworthy nonetheless, are oriented toward the issues and challenges associated with what may be a newly emerging bio-techno-economic paradigm. For the most part, however, the description here centres on the still far from exhausted potential of digital media (some would say, recalling the perennial "software crisis", barely tapped).
The studio-laboratory as a class is by no means homogeneous. Some are privately funded by corporations seeking to understand the properties of radically new media technologies via aesthetic R & D programs; others are publicly funded and linked to traditional museological mandates for public education; others are industrially sponsored pre-competitive laboratories based in universities; still other models are network-based and more or less explicitly tied to long-term state or regional industrial development objectives. The studio-laboratory can be understood as providing a site for an ongoing and progressive series of negotiations between artist-users and technology designers, which simultaneously shape the technology, its uses, and its users.
The survey is divided into three parts. First, stand-alone institutions, divided into those with mainly cultural roots and funding bases; those located in and financed by private corporations; and government agencies or institutes. Second, network structures of three kinds: research networks; networks linking cultural with socio-political organisms (civil society); and art production networks. Finally, a group of project-based initiatives is discussed. Two further prefatory notes: first, the sampling aims not at inclusiveness but at representative breadth; second, extensive online information is available in each case, and the internet address is provided.
1. Institutions
R & D laboratories in publicly financed cultural organizations
L'Institut de Recherche et Coordination Acoustique/Musique (IRCAM) Paris
www.ircam.fr
As previously noted, Pierre Boulez founded IRCAM as a transdisciplinary centre for musical research, experimentation, and cultural diffusion. Since its founding in 1977 it has been at the forefront of experimental artistic practices involving electronic media. It has always employed a substantial scientific staff researching perception and the material science of instruments, and developing software systems for musical production. While oriented in its first decade towards powerful, specialized resources available only to composers on site, it has since the late 1980s focussed more on diffusing its innovative software to a worldwide user community. Several of its applications have been commercialized and are in wide use by musicians and other interactive artists. According to Norman[13], it has provided an invaluable template from which many of the more recent establishments have drawn their plans. The challenge facing IRCAM now, however, is to remain current and to establish relations with the many new centres and networks set up in its pioneering wake.
The Zentrum für Kunst und Technologie (ZKM) Karlsrühe, Germany
www.zkm.de
The ZKM is now the largest and widest-ranging centre for art and new media in the world. Housing the first large-scale museum dedicated solely to "media art" since 1945, the ZKM is in some ways playing a role in relation to emerging interactive art practices similar to that played in the 1930s by the Museum of Modern Art in relation to photography: establishing the field from a museological standpoint, especially with regard to the special problems of maintenance, education, and support for complex technological installations. Combining in-house research and production with innovative forms of cultural diffusion, the ZKM is, from the standpoint of the density of its connections, the richest and most complex of current studio-laboratories.
Two institutes for research and experimentation are also located at the ZKM, one dedicated to Image and the other to Sound. The Image institute has been particularly influential, commissioning some 70 new works by international artists since 1990, many developed in-house and supported technically by staff engineers and researchers. The ZKM has also been actively associating itself with scientific expertise centres in Europe through the European Union's Esprit program for long-term research, and has developed similar productive links with other culturally oriented media centres in Europe, such as Ars Electronica and V2. However, its very scale raises questions of sustainability. Because it is financially dependent on state authorities, ZKM's global program has already raised questions about its relevance to local audiences and possibly also to business enterprise. So far, the artistic program at ZKM has been deliberately independent of industry sponsorship; pressure may rise for it to become more responsive to applied or sponsored research, a deeply controversial point at the time of writing.[29]
De WAAG - The Society for Old and New Media, Amsterdam
www.waag.org
The Society for Old and New Media exemplifies what might be termed a new breed of interventionist, policy-oriented public new media centres. The name signals its approach, which places both new and old media within a common framework; one of its key tactics is to seek and amplify resonances, both historical and practical, between them. From its mediaeval location in central Amsterdam, it inverts the typical "high-tech" image of the research laboratory, in line with its program of driving technical developments with a rich mix of cultural and historical references. In its applied research programs, de Waag has so far emphasized the application of design and technical creativity to enriching the range of what can be termed the "public domain" of cyberspace. A clear example is its award-winning public internet interface, based on the 19th-century Dutch reading table. Its programs include competitions, symposia, workshops and commissions; it grew in part out of one of the largest and most active "Digital City" internet sites in Europe, and from this inherits a strongly defined political and social program for defining a democratic "public domain" in the digital sphere.
The Society has played a European leadership role in the policy arena, advocating the growth of a broadly based network of cultural innovation centres across Europe. Much of this material has been summarized in the recently published "New Media Culture in Europe".[40]
The Banff Centre for the Arts, Alberta Canada
www.banffcentre.ab.ca
The Banff Centre is unusual for its location in a remote, non-metropolitan setting, which fosters an intensive, residential structure of activities. Its interests in advanced media and technological development grew out of a deliberately interdisciplinary arts context spanning music, theatre, literature and visual art. When it established a media research initiative in the late 1980s, one of its aims was to attract support from academic and corporate partners for in-depth investigations of emerging media by diverse teams of artists, all working with the same research and development team. A second aim was to make space in the formative stages for dialogue involving cultural theorists, philosophers and other humanists normally estranged from the sort of active technological development engaged in by artists and scientists. The difficulties encountered in that unusual effort are discussed further below; see also [30]. Currently, the Centre operates a multimedia institute offering a wide range of courses and seminars, but it has phased out its research-intensive activities.
Ars Electronica Centre, Linz, Austria
www.aec.at
Founded in 1979 by the Brucknerhaus and the regional television corporation of Upper Austria (ORF), the Ars Electronica festival was, at the time, the only annual showcase exclusively devoted to forms of electronic art. Combining the exhibition of works, the organization of conferences and the recognition of pioneering electronic-art producers (the "Prix Ars Electronica" was created in 1987), Ars Electronica figures as a foundational event on the international scene of contemporary art. Since the inauguration of the Ars Electronica Centre in 1996, the organization has operated year-round in Linz. The festival and the centre boast an impressive roster of corporate as well as state and institutional funders.
FAE Centre Director Gerfried Stocker defines the centre's mandate in terms of transdisciplinarity, by which he means the transfer of knowledge between practices and disciplines. However, Jutta Schmiederer, the FAE's Producer, also stresses Ars Electronica's role in disseminating knowledge and use of new media by encouraging the local and international community to engage with and transform those technologies.[13] These dual emphases reflect Ars Electronica's dynamic as a whole. The "Lab of the Future" project works to develop advanced 3D animation and internet technologies while concurrently exhibiting recent products. The coexistence of this kind of display and simultaneous practice lends Ars Electronica an unparalleled internal vitality.
Art-labs in private sector firms
Art+Com, Berlin
www.artcom.de
Art+Com operates as a research and development centre for computer-aided visualization and design. What distinguishes it from purely industrially oriented labs carrying out sponsored research is its emphasis on research into "the new media grammar"; i.e., according to its chief Joachim Sauter, "how to use the computer as a medium, not a specific tool". Grammar here is understood as the expression that is "inherent" to the new technology. Art+Com maintains a balance of sponsored and internal research projects; the former include visualization systems for firms such as Daimler-Benz. Of the latter, a good illustration is a "grammar defining" project called Zerseher.
"The observer finds himself in a museum environment, a framed picture hanging on a wall. Upon coming closer, the viewer notices that exactly the spot of the picture he is looking at is changing under his gaze."
This work makes clear the distinction between the computer as a simulated paint brush (tool) and as an inherently interactive medium. Art+Com’s celebrated TerraVision simulator (1994) linked various satellite views of the earth with visualization systems, giving the user a continuous zoom-in from space.
Xerox Parc Artist in Residence Program, Palo Alto, CA
www.parc.xerox.com/red/members/richgold/PAIRBOOK/pair1.html
Since 1993, Xerox PARC's Artist in Residence Program (PAIR) has provided Bay Area artists with the opportunity to carry out their own projects in the corporate lab, collaborating with like-minded scientists on common projects. Pairings are voluntary, and the structure is oriented toward process rather than product; in no case are artists required to implement the ideas of scientists, or vice versa. PAIR is understood to help the laboratory remain relevant to the needs of the corporation by encouraging artists to experiment with the future forms and paradigms of documents. As John Seely Brown writes, "Xerox is, after all, the Document Company and what artists fundamentally make are documents, and in particular, new forms and genres of documents. Artists are really document researchers discovering new kinds of documents… even new definitions of what constitutes a document."[31] The program's founder, Rich Gold, was an avant-garde music composer before he entered the computer industry through games design; he says:
PAIR is not based on the belief that each person must be both an artist and a scientist, though such people exist, but rather that there is a class of extraordinary activity that a scientist and an artist can simultaneously engage in that is mutually beneficial to both.
Nippon Telegraph and Telephone (NTT) InterCommunication Centre (ICC), Tokyo
www.ntticc.or.jp
The ICC was opened in 1997 as part of a large scale Shinjuku cultural complex, through the initiative of the Japanese Public Association for Telecommunications, and sponsored by NTT. ICC is conceived as a prototypical "information network oriented arts and science interface" — a new kind of museum for the 21st century depicting "a vision of life in a post-industrial society". The term "intercommunication" signifies the inter-linking of art, techno-science, and society. NTT sees its sponsorship of this cultural project as contributing to "thematic communication" — imagining new uses for future technologies, and it looks forward to ICC offering "exciting feedback into the world of technology". Like the ZKM, the ICC maintains a permanent collection of media art works on exhibition, all highly participatory, interactive works that exemplify formal openness and multi-sensory immersion. The centre also has a laboratory wherein artists and engineers collaborate in the production of electronic art works.[32]
ATR Corporation, Media Integration and Communication Centre, Kyoto
www.mic.atr.co.jp/index.e.html
ATR International, a consortium of seven research centres devoted to telecommunication, set up the MICC in 1995 for studies in art and communication. The research laboratory is divided into four units: the reconstruction and creation of communications environments, the foundations of communication, the expression and transmission of mental images, and finally, the process of human communication. Interactive Art is of central interest to this lab, as a domain through which engineers are researching the base technologies for representing/transmitting human emotion (‘kansei’, or sensitivity). Effectively, the approach is to develop models for machine "understanding" of gestures, images, and speech. Collaboration takes place both ways: "Artists present a new concept and engineers provide technologies to realise it…Engineers present a whole concept, and artists produce the art part" [33]. A group of four media artists work in a fifth ‘art and technology’ unit. The goal is that sophisticated communication and interaction methods will be discovered that "overcome the cultural and language gaps among people".
Interval Research Corporation, Palo Alto CA
www.interval.com
Only a stone's throw from Xerox PARC, Interval mixes artist-researchers into an already broad-ranging scientific and engineering research staff. Its charter is to look five to ten years into the future of computing and media. Rather than the open-ended, voluntary pairings of the PAIR program at PARC, Interval includes a sprinkling of researchers with backgrounds in such fields as interactive art, theatre, and documentary film. David Liddle, the manager of Interval, sees them adding cognitive diversity through their unique standpoints. By bringing in "alien methodologies", he notes, "most of these people are the herb, not the entrée, in the particular project being baked [but] the minor ingredients … are very, very important. There is no chance of doing good, new work in these areas in a sterile environment where there are no herbs allowed" (quoted in [34]). The noted media artist Michael Naimark recounts how, as a member of the Interval research staff, he and computer-vision researchers nurtured a symbiosis in which 3D stereoscopic computer models, based on panoramic landscapes he had gathered for an art project, provided the researchers with valuable material. "The fact that it was not simply 'views of the parking lot' was gravy". (From seasonings to sauce…)
Canon ArtLab, Tokyo
www.canon.co.jp/cast/
Founded in 1991, the ArtLab is a corporate lab devoted to the integration of the arts and sciences, primarily by encouraging new artistic practices using digital imaging technologies. The lab itself consists of offices and a "factory"; the latter employs computer engineers who use Canon digital products in interaction with artists in residence to produce new digital art works.
Since its launch, the studio portion of the program has presented exhibitions of the works developed in-house. In 1995, seeking to introduce multimedia works to the general public, the ArtLab began its Prospect Exhibitions program, which also circulates the work of multimedia artists and creators from a variety of new media centres. Workshops and lectures on new communications technologies and practices, both national and international in scope, are also organised on an ad-hoc basis.
University/Public Sector Studio-Laboratories
German National Research Institute for Information Technology (GMD), Bonn
Institute for Media Communication
viswiz.gmd.de/fleischmann
The GMD Institut für Medienkommunikation is composed of four departments: Visualization and Media Systems Design (VMSD), Multimedia Applications in Telecooperation (MAT), Networks (NW), and Media Arts Research Studies (MARS). The centre is primarily a teaching and research facility, hosting a number of innovative projects which actively integrate technological and cultural innovation in the development of new media forms and content.
The centre's artistic direction was set by VMSD Director Monika Fleischmann, along with architect Wolfgang Strauss. However, a good deal of the initiative to integrate artistic and technological innovation can be traced to the efforts of Wolfgang Krüger, who came from the Berlin centre Art+Com to join the GMD in the early 1990s with the goal of building a cultural perspective into technical development and technical expertise into artistic practice. Although Krüger left the institute in 1995, his legacy of interdisciplinary practice nonetheless remains.
According to the visions set forth by both Krüger and Fleischmann, the pure functionalism of technological research should always be undercut by the cultural meaning or purpose of what is being developed. In the development of new media tools, innovative products should necessarily be in the service of the expressive and aesthetic possibilities defined by cultural producers and creators. The centre's "VizWiz" (Visual Wizards) group, for example, has developed new digital tools such as the Wall of Communication, a sort of virtual billboard which permits multiple users to post their images and ideas during teleconferencing sessions, and the Responsive Workbench, a table which serves as an interactive projection site for group work, where multiple audio and visual feeds can be cohesively integrated.
Electronic Visualization Lab, University of Illinois, Chicago campus
www.evl.uic.edu
Since its inception in 1973, the EVL has established itself as a centre for academic excellence in the development of computer graphics and interactive media applications through its transdisciplinary pedagogy. It offers a rare joint graduate degree between the visual arts and computer engineering departments.
During the 1970s, EVL hardware and software were used to generate animation for the first Star Wars movie, while in the late 1980s the lab began focussing specifically on scientific visualisation, providing media tools for engineers and research scientists. More recently, EVL activities have encompassed the production of virtual-reality tools and environments, such as the CAVE (Cave Automatic Virtual Environment) virtual-reality theatre (1992) and the ImmersaDesk virtual-reality work space. Producers associated with the EVL also showcase their innovations at a variety of academic, industry and electronic arts conferences.
Centre for Advanced Visual Studies and Media Laboratory, MIT, Cambridge, MA
cavs.mit.edu
www.media.mit.edu
Both of these centres date back to the turbulent era of "art and technology" collaborations of the 1960s. Kepes, the CAVS founder, had earlier brought the European Bauhaus tradition to Chicago; at CAVS a dynamic program was quickly established, including an international group of artist and critic Fellows. This was shortly followed by a graduate degree in visual studies which counts among its alumni pioneers of virtual reality and interactive art.
The Media Laboratory grew out of the important research led by Negroponte on computer-aided design; known as the Architecture Machine Group, it built on the strong scientific base at MIT in computer graphics systems and artificial intelligence. Interaction between these two groups continued through the 1970s, but seems to have diminished as the Media Lab was conceived and eventually opened in the mid-1980s. The program of the Media Lab translates many of the cultural and technological "threads" of the IT paradigm into a coherent vision of a hyper-mediated techno-scape premised on breakthroughs in machine intelligence. From the start, artists were understood to play an important part in the wider Media Lab mix. As then-academic director Steven Benton explained during its early years, the new meta-discipline of "Media Arts and Science has a technical, perceptual, and aesthetic basis, but no-one here is solely an artist… The Barry Vercoes, Tod Machovers, Muriel Coopers all are doing research with a technical base. We are not trying to be an art school... It's a new kind of research trying to be informed by aesthetics".[35] The status of artists at the lab has been controversial; this has proven to be a complex, sometimes acrimonious dispute, closely calibrated to how well artists themselves are able to accommodate the agendas of the Lab's mainly corporate sponsors. Stewart Brand, author of a quasi-official book on the Media Lab, has commented: "The Lab was not there for the artists. The artists were there for the Lab. Their job was to supplement the scientists and engineers in three important ways: They were to be cognitive pioneers. They were to ensure that all demos were done with art - that is, presentational craft. And they were to keep things culturally innovative. Having real artists around was supposed to infect the place with quality, which it did."[36]
A new generation of researchers may be forging more integral fusions of the aesthetic, the technical and the perceptual than Brand suggests. Ishii's "Tangible Media" group designs 3-D and spatial interfaces that border on the kind of sensory environments often found in the work of the best media artists. Importantly, cultural traditions, such as the abacus from his own Japanese upbringing, are considered in defining the affordances for effective human-machine communication. Maeda, a graphic artist, may be the first of a new, younger breed of artist-engineers. Carrying forward Cooper's Visible Language Workshop in a research program on "aesthetics and computation", Maeda's stated aim is the "true melding of the artistic sensibility with that of the engineer in a single person".[37]
2. Networks
Research networks
European Union's I3 - Intelligent Information Networks (Esprit long-term research): ERENA, eSCAPE
www.i3.org
escape.lancs.ac.uk/index.html
This program of the EU finances over a dozen multi-national, interdisciplinary research networks, organized around three themes: experimental school environments, inhabited information spaces, and connected community.[38] Designers of all types -- industrial, graphic, product -- play a central role in all these networks. In the present context, we note the programs of two networks which set out to work closely with the electronic and media art community as equal partners in their research.
The eSCAPE - electronic landscape - network addresses the difficult problem of inter-communication between virtual environments, particularly those using quick-maturing spatial and immersive interaction techniques. Since this is a field in which artists have been intensively active since the 1960s, the computer scientists directing the project sought to draw on this rich fund of past work for models and inspiration. Partners include the ZKM, GMD, and scientific partners in Sweden and England. A third component in the mix is ethnography, at two levels so far: first, studies of users in heavily mediated settings (traffic, ambulance control) to derive new design principles; second, studies of users of interactive art works (field work done at ZKM) to gain new understanding of the complex interplay between cognitive, physical, and sensory experience.
Interlinkages between these three components (technology development, art, social science) have so far remained nascent. According to the EC officials responsible, the coupling is justified in part because the aim of this research is to look very far forward, to better "understand how information and communication start making a difference when they're embedded in a real context". Thus, it is important to "forget about virtual environments and trying to fit people into some artificial world… how can we help people in their everyday environment, and integrate technology into this?"[39]
The Erena network shares much of the philosophy of eSCAPE, specifically addressing a set of "arenas" which have traditionally been considered cultural: performing arts, galleries and museums, and broadcasting. One project combines a telecommunications firm (BT), computer scientists, a television producer, and performing artists to create an interlinked live broadcast + 3D internet system ("inhabited TV"). Another pushes the limits of synthetic actors in computer animation.
Civil society focus
European Cultural Backbone
The concept of a continent-wide "social, cultural and technical infrastructure" of independent media centres, research facilities, newsletters, and online forums developed through the 1990s, supported by the cultural program of the Council of Europe. As Marleen Stikker explains the concept:
"Sustaining the public sphere is an essential factor in fostering an innovative European media culture. This means providing participatory public access to networks and media tools, and privileging public content, by developing the digital equivalent of public libraries and museums, as distinct from privately owned databases and networks….
For an effective exchange of expertise and training, an open, online communication environment is required. Other means of distributing information and knowledge, including publications, newsletters and workshops, should also be developed. Such facilities must cater to the multilingual reality of Europe through the provision of adequate software, design and translation. To be effective, culture as much as science requires its domains of primary research, which needs to be supported by appropriate environments and resources (e.g., independent research laboratories for media art)".[40]
The CD-ROM and web site Hybrid Media Lounge is a self-described "interactive visual representation of European Network Culture". The first four menu sections — hard data, soft data, context, and network — each provide a different representation of resources available, interests, and linkages between nodes.
Art production focus
ANAT - Australia Network for Art and Technology
www.anat.org.au
ANAT presents a clear model of a mainly virtual structure whose role is to "advocate, support and promote the arts and artists in the interaction between art, science and technology". Founded in 1985, and supported by the Australia Council, it offers annual "summer schools" for working artists, support for international travel and exposure, critical dialogues, and funding for projects and residencies. The vision of its director is "not to build edifices to new media art practice… but rather in building mechanisms, where new media art practice is included in exhibition, performance, literature based practices." [41] ANAT has organized its flagship summer school around scientific topics of growing critical concern to artists, like biotechnology and artificial life. From this intensive several-week session were born several projects by artists which now continue in scientific labs. Financial support for this subsequent phase, now taking place, comes from public sources concerned with the promotion of science awareness.
3. Projects and targeted funding schemes
Art-Science award schemes
Sci-ART - Wellcome Trust, Gulbenkian Foundation, NESTA - National Endowment for Science, Technology and the Arts. London, UK
www.wellcome.ac.uk
www.nesta.org.uk
The Wellcome Trust is one of the largest bio-medical research foundations. Hoping to widen public understanding of science, particularly biomedical, it launched a competitive scheme in 1997 to bring together the "often separated cultural spheres of science and art". The aim is to match professional artists with scientists, working on common projects that "grew out of a genuinely reciprocal…inspiration". Two rounds of awards have now been given, some 6 per year, each averaging about US$25,000. The varied formulas for collaboration in these pairings present a panorama of the dynamics of art-science cooperation: from the artist as a medical subject for a scientific group working on the relationship between "looking" and "reproducing"; to the whimsical creation of a new fashion line derived pictorially from an interpretation of the dynamics of embryonic development. [42]
The Gulbenkian Foundation, UK Branch, has run a granting program since 1997 called "The Two Cultures - Arts and Science". Based on this experience, the Foundation is preparing a major publication about Science and the Arts -- what it calls the first 'map of the world' of this vast territory, to appear in the Fall of 1999. Commenting on the findings, the foundation reports: "Many people take the view initially that the creative processes in each discipline are fundamentally the same, but that is not what our current research reveals. Indeed, stepping from one 'planet' to the next takes some adjusting to and sometimes the views of each on the other (artists on science, scientists on art) are curiously out of kilter. The book should reveal many new opportunities for artists but also explain to scientists the value of seeing the world from the peculiar tangential viewpoint of the artist". [43]
In 1999, a consortium was established between the Wellcome and Gulbenkian Foundations, plus the Arts Council of England and the newly formed National Endowment for Science, Technology and the Arts (NESTA). A new program is planned in which NESTA provides funding for "follow-on" stages of projects begun through science-art collaborations. Details are not announced, but based on the published charter of NESTA, it is likely to include investment for commercialization of intellectual property, touring of exhibitions or performances, and publication.
Hybrid Workspace - the Temporary Media Lab Model
www.medialounge.net -- See CD-ROM: Hybrid Media Lounge
Hybrid Workspace was a summer-long project in 1997 produced by the Documenta world art exhibition in Kassel, Germany. It was conceived as a communication experiment, highlighting the creative process and untapped potential of digital media, more than the display of fixed aesthetic works. This entailed setting up a temporary media space, with equipment to produce a range of multimedia, web broadcasts, pamphlets, and television and radio programs. The logic of its planners also identified the redundancy of many current conferences and professional meetings, particularly where proceedings are instantly available over the internet. Face-to-face meetings with people in such settings can rarely progress to the stage of detailed, practical exchange. The idea of setting up a hybrid workspace was to make possible a series of topical work sessions, each led by a different group/collective. Fifteen such groups consisting of artists, activists, critics and their guests presented their work, produced new concepts and started campaigns that developed and continued long after the gathering. This CD-ROM archive documents the rich and diverse results of the Hybrid Workspace.
The model has been considered a useful organizational innovation, and a follow-on project is now in preparation in Helsinki for the newly opened national museum's digital media centre. This model presents an interesting approach toward knitting together, in a production context, the interests of local groups, new entrants to the field of media-production, and a diverse range of international/visiting theorists and practitioners. We return to its potential for longer term, systemic impact in IT capacity development in a later section.
This table displays the name of each institution, network or project surveyed, against columns defined as follows:
1. Date founded
2. Mode of operation: S = on Site, i.e., at the main studio-lab location; D = Distributed, i.e., multiple sites cooperating; T = Touring, i.e., works often are co-commissioned and toured to other centres
3. Typical manner of teamwork (pairings of artist/scientist, small teams, common platform)
4. "Lead" tendency: left-pointing means mainly "art-driven"; right-pointing means mainly "science-technology driven"
The table is indicative only, meant to provide an overview of the very distinctive models presented by the cases selected.
Institutions - cultural | 1 Date | 2 Mode | 3 Style of teamwork | 4 'Art'----'S&T' |
IRCAM | 1977 | S | Pairings (composer/programmer) | |
ZKM | 1991 | S + T | Small teams, scientific cooperation | |
Society for Old and New Media | 1994 | S + D | Small teams, extended collaborators | |
Banff Centre, Art & | 1988 | S + T | Common platform for 8 artist projects | |
Ars Electronica | 1979 | S + T | Small teams | |
Institutions - firms | | | | |
Art + Com | 1988 | S + D | Core staff; sponsored projects | |
XeroxParc PAIR | 1993 | S | Pairings | |
NTT ICC | 1990 | S + T | Small teams | |
ATR MICC | 1995 | S | Individual or small teams | |
Interval | 1992 | S + D | Individual + teams | |
Canon ArtLab | 1991 | S + T | Small teams | |
Institutions - University/Public | | | | |
GMD - MARS | 1992 | S | Teams; some large projects | |
EVL Chicago | 1971 | S | Individual & pairings | |
CAVS-MIT | 1967 | S | Individual, teams less often | |
MediaLab-MIT | 1984 | S | Academically supervised, industry-funded teams | |
Networks | | | | |
I3 Escape, Erena | 1997 | D + S | Distributed large and small teams | |
EuroCultural Backbone | 1998 | D | Small teams, network learning | |
ANAT | 1989 | D | Workshops, pairings | |
Projects | | | | |
SCI-ART | 1997 | D | Pairings | |
Hybrid Workspace | 1997 | S + D | Collective learning | |
Instruments and the Imagination
One fruitful way to think historically about the kind of techno-cultural creativity manifest in the studio-laboratories just surveyed is to recall the role that instruments have long played on the margins between science, art, magic, entertainment, and philosophy. Citing science historian Thomas Hankins: "To understand actual scientific practice, we have to understand instruments, not only how they are constructed, but also how they are used, and more important, how they are regarded". Hankins does just this in a book about curious, mostly forgotten instruments from the 18th and 19th centuries -- ocular harpsichords, animal automata, stereoscopes and magic lanterns -- which oscillate between demonstration, entertainment, magic, and measurement. The crucial point that Hankins makes is that even such "objective" devices as the telescope, microscope or air pump were the subjects of controversy in their time; just as the photograph was later in the 19th century, and as today, digital processing of images makes the veracity of any picture questionable. "We choose", Hankins writes, "how to represent the natural world to ourselves"[44]. Instruments are a way of "questioning nature", a "language of inquiry"; and the historical examples retold with verve in Hankins' book suggest a way of considering today's investigators -- artists and scientists -- in the spirit of those "natural philosophers", whose "instruments move easily between natural science and other human activity".
Media technology as boundary object
A striking set of examples where today's investigators specifically designate technology as a shared medium of joint exploration is available from the Xerox artist-scientist pairings. Each case indicates the medium taken as point of departure, and the contrasting way in which they were regarded and employed by scientist and artist respectively:
- scanning tunneling microscopes (STM): as a sub-atomic recording device; used by musicians to convert atomic bumps into sound patterns
- images as glyphs in which technical data is embedded; images as iconography carrying metaphorical and linguistic layers
- web-site for social-action art project with mental patients; web-site for corporate communications.
The PARC commentators refer to the medium (or "experimental document" in their corporate jargon) as a common language, but a more apt metaphor is perhaps that of the boundary object. This is a term introduced by sociologist of science S. Leigh Star, describing "scientific objects which both inhabit several intersecting social worlds and satisfy the informational requirements for both of them" [45]. Through a radically opposed dialogue about the STM, one PARC researcher recounts, a new line of questioning grew about how the senses are extended through instruments: "Are there untapped sensory channels for interacting with the unseeable which enable powerful conceptualization?"[31]
Similar conceptualizations of the sensorium characterized the collaborations during the 1960s between AT&T Bell Labs researchers in vision and perception, and the varied artists -- musicians and filmmakers, mainly -- who worked with them. In the words of vision researcher Bela Julesz: "Visual perception is historically a common area for both the artist and scientist, a common intersection where there is no gap or artificial bridge. The same kinds of things can be artistic or scientific; the only difference is the motivation… the artist is searching for an artistic truth, an intimate truth he wants to convey, and I am searching for scientific truth, which is testable and very defined."[17] The activities of these teams tended to focus around the digital computer, which was constructed as a tool for understanding human perception, and at the same time, as a potential new medium for artistic expression. Bell researchers tended, in the main, to locate the artistic added value in the unique ways in which artists could train themselves to perceive, and thereby, shape, images or sounds. John Pierce, director of the Communication Sciences Division, acknowledged that in seeking to program computers to produce intelligible speech, "one of the most important human faculties is that of being able to judge qualities even when we cannot measure them. Here the ear of the trained musician may be as valuable as the digital computer."
Today, similar cases abound; entire labs, like the Chicago Electronic Visualization Laboratory, operate on the basis of the heterogeneous shaping of a common medium which can prod new disciplinary insights. In some cases, the "uncertainty" of the object's identity has declined over time, becoming, much as Hankins described some of the pre-scientific instruments of "natural magic", more or less stabilized at one or another of its poles of attraction. Such, it could be argued, is the case of scientific visualization at the EVL: to the extent that the aesthetic shaping of the immersive simulations developed there is confined to the usual "non-essential" parameters of color, form, or texture, the object has settled at the scientific side of the margin.
As we have previously seen, one area where the boundaries today are notably blurred is the field of "artificial life", attracting artists with interests and background in biology and computation to create evolutionary digital systems. Broadly speaking, ideas from genetics have begun to shape the way many computational artists conceive the inter-relationships between their formal materials. In the simplest manner, style can be characterized in terms of traits, and as objects -- drawings, or melodies, for example -- replicate, they change form according to programmed rules of reproduction and mutation. Artificial life extends evolutionary metaphors even further, in the work of the team Christa Sommerer and Laurent Mignonneau, who develop artificial-life installation works as researchers at ATR corporation in Tokyo. They build imaginary eco-systems which evolve and mutate as artificial virtual worlds, but which can also react to observers' gestures where interfaces for sensing them are provided.
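The replicate-and-mutate scheme described above can be made concrete with a small sketch. This is purely illustrative (the function names, and the choice of MIDI pitch lists as the "traits" of a melody, are assumptions of ours, not drawn from any of the systems discussed): each generation copies its parent, with some pitches perturbed by a programmed mutation rule.

```python
import random

def mutate(melody, rate=0.3, rng=None):
    """Return a copy of the melody with some pitches perturbed.

    Each pitch mutates with probability `rate`, shifting by a small
    interval chosen from a fixed set -- a simple 'rule of mutation'.
    """
    rng = rng or random.Random()
    return [p + rng.choice([-2, -1, 1, 2]) if rng.random() < rate else p
            for p in melody]

def evolve(seed_melody, generations=5, rng_seed=0):
    """Replicate a melody across generations, mutating at each step."""
    rng = random.Random(rng_seed)
    lineage = [seed_melody]
    for _ in range(generations):
        lineage.append(mutate(lineage[-1], rng=rng))
    return lineage

# A lineage of six melodies: the seed plus five mutated descendants.
lineage = evolve([60, 62, 64, 65, 67])  # MIDI pitches: C D E F G
```

A real evolutionary artwork would add selection pressure (a fitness function, or the observer's choices) to decide which variants replicate; this sketch shows only the replication-with-mutation step.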
A scientific colleague at the same lab, computational biologist Tom Ray, illustrates well the instability of borders between artificial-life artists and scientists, when he calls for a "new aesthetics", based on "free evolution in the digital medium". Interestingly, he argues this evolution need not be "inherently visual or auditory in nature, …and would not be recognized as conventional artistic creations". He seems to be describing a kind of computational beauty inherent to the digital medium, with "richness comparable to what [evolution] has expressed in the organic medium".[46]
The Musical Instrument as Interface Metaphor
There is one special case of the projection of human imagination through skilled instrumental performance: musical instruments have long served as metaphor and analytical model for philosophers (think of Heraclitus or Confucius), mathematicians (Pythagoras or Galileo), and in our own time, computer scientists and interface designers.
From the earliest years of personal computing, a controversy has simmered about the trade-offs in designing systems that are easy to use but quite general in their scope, or more challenging to master, but with greater depth and power. Alan Kay, credited with conceiving the personal computer as a portable "Dynabook" (and later helping Xerox to implement one of the first "personal workstations"), was also influential in promoting the notion of computer use as a medium for creative thought. In their 1977 paper on "personal dynamic media", Kay and Goldberg [47] explained their design goals as wanting to combine the broad, standard-model usability of inflexibly mass-produced items like cars and TV sets with the plastic, moldable, open-endedness of tangible media like paper or clay. The key, Kay argued in 1977, is learning to use a high-level programming language, inspired by Seymour Papert's artistic approach towards teaching children to program.
In the meantime, the trajectory that actually became locked in once personal computing took off in the 1980s is based not on a style of programming, but rather on a graphical means of manipulating and selecting surface icons -- the ubiquitous "graphical user interface". Far from Kay's subtle, even dialectical conception of fluency within a dynamic medium, most computer use could be characterized as brittle, fault-intolerant, and closely coupled with proprietary software "solutions" -- packaged applications -- that offer only minimal room for user-programmed extensions or variation.
In a forthcoming book about Douglas Engelbart and his Palo Alto research group, Bardini sharply pinpoints the actual losses entailed in the "lock-in" of the PC in its present form. [48] Early researchers, like Engelbart during the 1960s, thought of the user as acquiring progressively more powerful kinesthetic and motor skills; in effect, operating interfaces with greater instrumental virtuosity to keep pace with the mental scope and expressive boundaries set by the user's intellect. The idea of learning to "play" a piano-like key-set, in order to navigate conceptually through information space, may seem like science fiction; but this is what Engelbart himself built and mastered, and arguably, its originality is such that it deserves to be considered a more profound interaction paradigm than the "mouse" with which he is actually credited.
Alan Kay, meanwhile, who is himself a skilled musician, has tended to be ambivalent about how literally to base human computer interaction on a metaphor of musicianship. Younger theorists already describe "interface" as the characteristic art form of the 21st century, with much the same kind of historical determinism driving their arguments that pertained during Henri Bergson's time when cinema was widely welcomed as the 20th century's defining art form[49]. To have a glimpse today at what this prediction might look like in 10 to 20 years, it is likely more suggestive to extrapolate from the more speculative, 3D or installation-based creation of current artists and design engineers, than to look at the incremental variations coming from software vendors. Much of this work begins with something like a musical notion of the machine interface, using bodily motions, breathing, movement, gesture to shape the art-work's responses in a way that is, at least in principle, amenable to personal nuance.
Turning back towards what might be dubbed the more "cognitive" pole of the mind-body continuum, it is still worth recalling how Kay and Goldberg had envisaged the system design of a "dynamic personal medium" two decades ago:
"Our design strategy, then, divides the problem. The burden of system design and specification is transferred to the user. This approach will only work if we do a very careful and comprehensive job of providing a general medium of communication which will allow ordinary users to casually and easily describe their desires for a specific tool. We must also provide enough already-written general tools so that a user need not start from scratch for most things she or he may wish to do".[47]
Creative Users in IT Design and Diffusion
"User innovation" has become a commonplace term of late, indicating the importance of the user (customer, client) as a partner in the innovation process. Von Hippel explains the benefits of turning users into designers as "faster and better and cheaper learning by using" [50]. Advanced firms, he argues, are changing the very economics of design, by investing in software-based application-specific toolkits that "transfer a capability to design truly novel customized products and services to users". His examples come from manufacturing (custom-designed circuits and software), and he stresses that the design tool-kit reduces the iterations and flow back and forth between users and designers.
Consider these points in a non-manufacturing case now, the software used by artists to make movies, music, or multimedia -- all dynamic, time-based expressions which technically challenge the computer's capacity to synchronize and co-ordinate various kinds of audio-visual representations. Software applications have been widely available for some 15-20 years that permit artists to create more-or-less independently from the system programmers on whom they formerly depended if they wanted to use computers without learning to program. As a class, software for animation or music abstracts [51] some aspects of the craft of movie-making or composition, mechanizing them into modules much like the "already-written" generic tools Alan Kay thought all users would likely call on in his SmallTalk system. But what about support for individual expressiveness, corresponding to the distinctive traits of an artist's style or signature? Recalling Simon Penny's present-day concern about artists' practices being re-shaped to conform to the restrictions of their computer-based tools, it is evident that the ability to design novel capacities beyond the base mechanisms embedded in common applications remains elusive.
As has been shown by the successive diffusion of desktop publishing, image processing, music composing, and now multimedia/animation software, the distinctive appeal of such programs lies in the way they facilitate, for new classes of users, a degree of creativity that formerly required a specialist's craft training. The issue of boosting the general user's media fluency is of less interest to this discussion, however, than a closer look at the way in which new types of creative possibilities get embedded in software in the first place.
To do this, we will here present a précis of the results of part of a full case study about the emergence of the creative user of computer animation. In the mid-1960s, when computers were completely intractable to all but engineers, the very idea of applying digital calculation to the intensely artisanal production of animated film was by no means obvious. A host of contrasting, often conflicting interests existed from the start of computer graphics, and the earliest encounters between artists, system designers and programmers reveal a fascinating, and in some ways instructive story about the conditions under which creative users enter into productive relationships with designers. Another way of saying this is that between the 1960s and mid-1980s, the computer itself was constructed as a medium for making movies, within a wide and sometimes contested zone of interpretive flexibility, to use the phrase of Dutch sociologist of technology W. Bijker [52].
Artists as Lead Users of Early Computer Animation Systems
The base technologies for interactive computer graphics were largely developed in U.S. military research programs, often closely aligned with key universities like MIT, and supported by the Pentagon's aggressive funding of fundamental information processing research. By the mid-1960s, development of civilian applications was underway as well, notably in aviation, architecture, and scientific communication. Many of the same organizations also experimented with artists as lead users of early mainframe animation systems. Broadly speaking, two design approaches towards computer animation were pursued: picture-driven, and language-based. The latter specified visual images and their continuity using traditional textual computer programming languages; they depended on the ability to describe visual phenomena mathematically. Picture-driven approaches aimed to assist aspects of the hand-crafted art of animation, permitting the non-specialist artist to draw and ink the cels serving as key-frames, using the computer to coordinate the images and calculate the transitional (in-between) images. [53]
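In its simplest form, the in-betweening performed by a picture-driven system amounts to interpolating corresponding points of two key drawings. The sketch below is a minimal illustration, assuming plain linear interpolation between matched point lists (actual systems, including the NRC's keyframe work described later, used more sophisticated interpolation and correspondence handling); the function names are ours.

```python
def inbetween(key_a, key_b, t):
    """Linearly interpolate corresponding points of two key drawings.

    key_a, key_b: lists of (x, y) points in corresponding order;
    t: position between the keys, 0.0 (= key_a) to 1.0 (= key_b).
    """
    return [(ax + t * (bx - ax), ay + t * (by - ay))
            for (ax, ay), (bx, by) in zip(key_a, key_b)]

def generate_inbetweens(key_a, key_b, n):
    """Produce n evenly spaced intermediate drawings between two keys."""
    return [inbetween(key_a, key_b, (i + 1) / (n + 1)) for i in range(n)]

# Three in-between drawings for a two-point figure moving up and right.
frames = generate_inbetweens([(0, 0), (10, 0)], [(10, 10), (20, 10)], 3)
```

The design point the two approaches dispute is visible even here: this picture-driven scheme asks the artist only for drawings, while a language-based system would instead ask for a mathematical description of the motion itself.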
The study looks at similarities and differences between the way in which this field developed in various parts of North America; in particular, close attention is being given to the conditions of innovation which led to an unusually dense concentration of firms, researchers, and electronic media artists in Canada. Beginning in the mid-1960s, researchers at the National Research Council (NRC) and the National Film Board (NFB) -- both federally-funded agencies -- began to investigate the potential for using computers in film-making. The approaches taken, in each case, differ markedly from those of the American research sites. In both cases, the Canadian investigators were scientific and technical followers, not leaders, and they had very restricted budgets for equipment and personnel. They began their research by intensively studying everything the Americans had done to date.
To start with, the NRC researchers chose film-making as an application domain through which to study the problems of the man-machine interface. Besides computer animation, they also began an equally important program in computer-assisted music composition. Their goal was general understanding, ultimately to better support the use of interactive computing in science and engineering. But it was by no means irrelevant to their choice that the NRC was already a kind of studio-laboratory, supporting in the same Radio and Electrical Engineering department the groundbreaking research of a physicist-cum-composer on electronic musical instruments. By modeling the user as a creative artist, an original outlook resulted which at the time of its formulation in 1969 was notably different from the U.S. corporate or university labs [54]:
"…Up to this point, it has been assumed that the best possible way to design the computer would be to make it transparent. That is, to make it look to the user as though it were not even present, so whatever idea occurred to him, it could be rapidly formed into a final creation. This is not necessarily true."
Constraints, argued researcher Ken Pulfer, are crucial to the creative process, giving examples such as conventions for drawing in architecture, or scales and notational conventions in music. By supporting the use of such conventions, the user is given a more meaningful starting point than the abstract 'blank slate' of total generality.
"…Most computer languages now available ...are unsatisfactory either because they are mathematically oriented, or because they result in cumbersome and slow programs. As a result we are usually left with the situation where an artist-programmer team is formed, the artist uses the system without having intimate control over the functions of the blocks he uses, and the programmer builds blocks without fully appreciating the needs of the artists."
Pulfer and his team chose therefore to develop a system in which:
"at no time [was] it necessary for the user to learn how to program the computer, or in fact even to know how to operate it other than through making some choices from names presented to him on the screen... he can proceed to learn the 'language' by trial and error."
Crucial to the implementation of this design was the research on the first graphical user interface, just published in 1968 by Douglas Engelbart [55] -- interestingly, as a system for "augmenting the human intellect". The NRC team considered the results produced by the U.S. "artist-programmer" teams to lack validity for their purposes; for this reason, they chose to work only with professional filmmakers (or composers) who could teach them something about movie-making (or music composition).
Technical Innovation at the Canadian National Film Board
The National Film Board of Canada, founded in 1939 as the Government Film Office, was home to a world-famous tradition in documentary film and experimental animation. A strong technical research and cooperation department maintained a watch on the global development of motion picture technology, and this group too had a well-established tradition of technical innovation. In 1951, under the direction of the award-winning animator Norman McLaren, it had produced the first stereoscopic animated film, presented to stunned crowds at the Festival of Britain; during the mid-1960s, another team of filmmakers and technicians developed a unique multi-screen projection and camera system for the Labyrinth pavilion, which was soon thereafter transformed and commercialized as the IMAX wide-screen format. An electrical engineer who had previously worked in the telecommunications industry on the application of the computer to digital signal switching brought a disciplined bench-marking approach to the analysis of the computer as a tool for motion pictures. This quickly produced an intensive learning program in which the NFB received visits from, and in most cases pursued in-depth dialogues with, all of the key U.S. players; it also conducted tests using borrowed equipment.
Within the strong technical culture of the Film Board, there was strong resistance to "solutions" from outside experts being applied to creative problems. (Indeed, an early proposal from AT&T Bell Labs to "solve" an animation need for special effects was flatly refused.) This culture was strongly shaped by the model of McLaren, whose creative vision was sharply opposed to the assembly-line factory approach towards commercial animation typified by Disney Studios. He summarized his method, in 1948, as [56]:
- "attempting to keep at a minimum the technical mechanism standing between my conception and the finished work.
- handling personally the mechanisms that do remain, in as intimate a way as a painter her painting, or a violinist his violin.
- making the very limitations of these mechanisms, when brought in touch with the theme, the growing point for visual ideas.
- making sure of a chance for improvisation at the moment of shooting or drawing."
With this disposition towards the close interpenetration of idea and technique, the film board animators of the 1960s looked with some skepticism at the results of the art + technology experiments coming from such well-resourced U.S. centres as MIT, Bell Labs, IBM. The computer was imagined richly as a creative, administrative, and mechanical-control resource, but always in terms of a very concrete set of ongoing work practices.
Space does not here permit a comparable outline of these American studio laboratories. Suffice it to say that in these settings, the computer was mainly a scientific instrument, an aid to studying perception, or a modeling tool for the production of simulations. Links with artists tended to be far more "experimental", and it seems that where aesthetic considerations were important, these tended to equate artistic creation with the discovery of new forms of expression (rather than supporting a more known range of what users might already want to create). Only in a few cases did the scientific investigator think reflectively about what the user brought to the computer as a potential contributor to system design.
It must be remembered that computing in the late 1960s was formidably expensive, and software development a labor-intensive enterprise beyond nearly all non-technical users. After developing an internal knowledge base about the technical as well as aesthetic possibilities of computer animation, the NFB decided to look outside for compatible partners with which it could enter the field through "real" production, not just technical tests. This was arranged to take place with the National Research Council's system, which by 1970 had been extended with keyframe interpolation, the first system that allowed the artist to communicate graphically with the computer. [57] (This accomplishment was recognized, some 25 years later, with a Scientific and Technical Academy Award.)
The NFB rigorously evaluated the NRC system before the production period began; a series of improvements were made, all geared towards making it conform more closely to the mental models of a creative animator. These exchanges were documented, and a pattern of mutual accommodation developed between the NRC researchers (a team of three) and the NFB's French Animation studio. A set of criteria outlined what kind of film the NFB should aim to make. It should be one suited to the system's quite limited capacities, but also one chosen to push the medium enough to yield "generalizable" results applicable beyond the single instance.
The NFB producers found a suitable candidate in Peter Foldes, who had previously proposed a full animation treatment of a scenario that required extensive use of metamorphosis between shapes. The artist would spend a few weeks at a time working with the system in Ottawa; in the intervening periods, improvements were made based on what had been learned in production. The resulting film, Hunger, released in 1973, was immediately recognized as an artistically convincing character animation; it was nominated for an Academy Award and won numerous festival prizes.
The accomplishment of Hunger in matching an artist's vision to the still very intractable computer of the day can be interpreted in a number of ways. For the present purposes, it will suffice to note that the technique of linear keyframe interpolation was still far too primitive and mechanical to be used for what one critic has called the "anthropomorphic" style of the big-budget feature animation studios like Disney. While it promised to save costs by automating the intensive human labor of the artist drawing the intermediate frames, given its technical awkwardness, it could only be put to creative use by an artist willing to shape his or her vision to its still rather mechanical constraints. Indeed, this "machinic" interpolation, which in other contexts would have been a defect, gave the film its expressive signature, and the impact of the film proved to be far reaching. It proved that convincing artistic films could be produced by computer, at a time when Hollywood was only using it for title sequences or special effects. As well, it had a major influence in the technical community, attracting, especially in Canada, young people to the field of computer engineering precisely to further the possibilities of artistic animation.
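The "machinic" quality of linear keyframe interpolation is easy to appreciate once the technique is seen in miniature. The sketch below is purely illustrative, not the NRC implementation: it assumes a key drawing can be represented as a list of 2-D points with one-to-one correspondence between two keyframes, and the function names are the author's own. Each in-between frame is simply a straight-line blend of corresponding points, which is exactly what produces the mechanical, non-"anthropomorphic" motion described above.

```python
# Illustrative sketch of linear keyframe ("in-between") interpolation.
# Assumes each key drawing is a list of (x, y) points, with corresponding
# points in the same order in both keyframes.

def lerp(a, b, t):
    """Linear blend of scalars a and b at parameter t in [0, 1]."""
    return a + (b - a) * t

def inbetween(key_a, key_b, t):
    """Blend corresponding (x, y) points of two key drawings."""
    return [(lerp(xa, xb, t), lerp(ya, yb, t))
            for (xa, ya), (xb, yb) in zip(key_a, key_b)]

def generate_frames(key_a, key_b, n_frames):
    """Produce n_frames drawings morphing key_a into key_b."""
    return [inbetween(key_a, key_b, i / (n_frames - 1))
            for i in range(n_frames)]

# A triangle metamorphosing into a displaced triangle over 5 frames.
frames = generate_frames([(0, 0), (1, 0), (0, 1)],
                         [(2, 2), (3, 2), (2, 3)], 5)
print(frames[2])  # middle frame: [(1.0, 1.0), (2.0, 1.0), (1.0, 2.0)]
```

Because every point travels in a straight line at constant speed, the motion lacks the eased, arcing trajectories of hand-drawn in-betweens; this limitation, as noted above, became part of Hunger's expressive signature rather than a defect.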
Summarizing the lesson of this early episode of productive collaboration between two studio-laboratories: both were small, under-resourced, and unable to make further progress without the contributions of the other. None of the researchers identified strongly with (nor necessarily even knew) the way things "ought to be done" in computing. From the outset, both had something of a hybrid character -- the NFB, a cultural organization with a strong technical research group, skilled at absorbing and re-purposing new techniques; the NRC, a government research institute with an intellectual work culture friendly to artistic practice. Many of the individuals were cognitively open-minded and sympathetic to an approach toward creativity as:
"…a process involv[ing] trial and error, with the creator modifying the mental image of his creation as it takes place. He interacts with his creative medium...in a conversational way, learning the 'language' in which he can express himself as he goes along". [54]
This "heuristic" approach to computing was poles apart from the comparable, extremely influential theorization of computer-supported creativity by Negroponte in terms of artificial intelligence. [58]
Foldes, whom we can consider the "lead user" of the NRC system, realized how unusual his opportunity was, commenting later:
Disons que l'ordinateur américain a des yeux et l'ordinateur canadien une main. Les Américains ont des impératifs commerciaux, un souci de rentabilité. Les Canadiens du CNRS sont beaucoup plus désintéressés et subventionnent la recherche pure.[59]
One could say the American computer has eyes, and the Canadian computer, a hand. The Americans have commercial pressures, a concern for profitability. The Canadians at the NRC are much more disinterested, and finance pure research.
Constructing Canadian Animation Culture
In fact, the long-term outcomes of the early Canadian scenes of innovation in computer graphics and animation proved to be economically significant. Nearly all the successful producers of animation software, whose products are used around the world in the animation, multimedia, and CAD industries, were descended from or assisted by the people, ideas, and systems jointly formed at the NFB and NRC. An ongoing study traces the diffusion of ideas, innovations, systems, and skills up to the foundation of these companies.
Early results support an interpretation that the Canadian innovators shared a linked set of values about the interplay between creators and engineers, or what art historian Caroline Jones has called the "machine in the studio". It is tempting to think of these values in terms of what Paul Edwards, writing about computers and the "politics of discourse" during the Cold War, has called "the closed world" discourse. This term, for Edwards, signifies a:
"…linked ensemble of metaphors, practices, institutions and technologies, elaborated over time according to an internal logic and organized around the Foucaultian support of the electronic digital computer". [60]
Canada is widely known as a communication-saturated state, and the homeland of Marshall McLuhan. As political scientist Arthur Kroker puts it: "Canada's principal contribution to North American thought consists of a highly original, comprehensive, and eloquent discourse on technology."[61] One aspect of this discourse, previously mentioned, was McLuhan's aphoristic, elliptical way of thinking about new media of communication as art forms. Initially, new media are invariably understood in terms of the old (the message is the old medium); the new medium is only "freed" from its reliance on the old through creative -- artistic -- experimentation (the new medium is the message). McLuhan's deterministic way of compelling media along their "destiny" toward "maximal" realization can be maddening to some, but it should not mask his basic insight about how communication media reveal their possibilities through use. Can this discourse about media innovation be linked, as Edwards does convincingly for the computer in relation to Cold-War politics, to the "heuristic" system development approach taken by the NRC and NFB innovators?
What can be said at this stage with certainty is that different cultural constructions of the computer as a creative medium help to shape different development paths. Canada's "success story" in computer animation shows how niche strengths in high-tech industry can grow in diverse settings, and that the way user knowledge is expressed and cultivated with and through technical communities can play a key role in seeding and nurturing that growth.
The preceding section demonstrated how creative users linked to the innovation process over a several-decade period contributed not only to cultural enrichment in the uses of technology, but also to the growth of an important sector of a regional information economy. From the standpoint of the worsening inequities between the information haves and have-nots, showing how a strong cultural informatics capacity grew up at the figurative doorstep of Hollywood might not at first glance seem all that pertinent. However, there is also a long tradition of analyzing Canada as a "borderline case" -- the "hidden ground for the big powers", as McLuhan characteristically quipped [62], with elements of both "first" and "third" world countries.
Recasting the Canadian case slightly, it can be seen as one pathway to the building of local cultural distinctiveness in a situated set of informational practices. "Situated", in this context, leads us to consider the challenge of cultural diversity in the age of globalization. Much culturalist thought on this topic is still stuck in a "mass-media" mindset, like that of post-colonial theorist Edward Said, who has railed:
"The threat to independence in the late twentieth century from the new electronics could be greater than was colonialism. The new media have the power to penetrate more deeply into a 'receiving' culture than any previous manifestation of Western technology." (quoted in [63])
To be sure, corporate concentration in the media and entertainment fields continues its rampant increase. As the Economist magazine observed tartly: "What will the digital revolution do to the entertainment industry's emerging global oligopoly? Probably boost it."[64]
Said obviously overlooks the myriad ways new media have been used by opposition groups, NGOs, and identity-formations of all sorts; it is striking indeed that he appears to grant no power to the "backchannels" available through digital media. These uses develop alongside the fusion of the internet, multimedia, and computer games with "the entertainment economy", and so far it is anyone's guess to what degree pessimistic Frankfurt-School-style predictions of imperialist cultural hegemony will prevail.
Cultural policy makers have not, for the most part, helped matters much by their willingness to concede a limited role for culture as compensation against the loss of national identity through economic globalization. This lack of vision and advocacy often gets translated into a heritage-based conception of identity, grounded in the irreproachable values of restoration, preservation, and conservation. For those approaching cultural development from a more active technological perspective, policies emphasizing heritage priorities channel inordinate resources towards information projects concerned with inventory management, data retrieval, and classification standards. Unquestionably, the librarian's, curator's, or conservator's professional skills are crucial to delivering effective access to cultural heritage. But these objectives need not be in conflict with broader issues of creativity and innovation in the cultural use of digital media. As Stuart Hall has said, "identity is not in the past to be found, but in the future to be constructed" (quoted in [65]).
In a recent book about information technology for sustainable development, Robin Mansell stresses the role of information cultures in shaping "people's ideas about how they should be concerned with media, technologies, the advantages/or not of information access, tele-learning, telework" [66]. Drawing on the work of the Austrian scholar Ursula Maier-Rabler, she lists four such cultures, each followed here by a sketch of the values its label implies:
1. Protestant-enlightened information culture (U.S.A.)
- competitiveness, transparency, ICTs a basic instrument of economic action
2. Social democratic-liberal information culture (Scandinavia)
- enhanced knowledge about civil society is beneficial to individuals, and ICTs central to political emancipation
3. Catholic-feudal information culture
- information is hierarchically organized, and transmitted from the "info-rich" to others; no consensus on individual information rights
4. Centralist-socialist information culture (former Eastern bloc)
- precise information gathered and fed from the periphery to central organizations
As Mansell notes, none of these is a pure form. How they are configured is a factor in determining "whether there will be a demand for access to information via advanced Information and Communication Technologies".
As we have been suggesting in different ways throughout this report, another important information culture might be identified, defined less in terms of political or ideological alignments than in terms of its tactical grasp of the pragmatics of media. We will call this, partly tongue-in-cheek, the "art-hacker" information culture. This culture rejects any rigid separation of form and content; communication is never passive reception, but invariably entails some more or less actively expressed response. Nor is response confined to the pre-figured options that might shape a system. If the occasion demands it, new extensions can always be added to make it possible to think "outside the box" or "jam the channels". A certain parodistic reflexivity prevails in this ethos, as adbusters or culture jammers play with and undermine the communication flows of their opponents.
On a more theoretical level, this information culture has a deep suspicion of what Berkeley linguist George Lakoff identifies as "the conduit metaphor", a deeply engrained linguistic habit in which "ideas are taken as objects and thought is taken as the manipulation of objects [and] that memory is storage…Ideas are objects that you can put into words, so that language is a container for ideas, and you send ideas in words over a conduit, a channel of communication to someone else who extracts the ideas from the words".[67] The conduit metaphor for communication, like the "linear model" of innovation previously critiqued, is deficient because of its inability to cope with complex systems. The metaphor is widespread and pervasive, contributing to the common way in which "content" or "content services" are seen to be made of separate stuff from software and hardware, to which people are given "access" or not, through more or less transparent or affordable interfaces or channels.
The art-hacker culture pervades the practices of the various studio-laboratories already discussed; here we wish to consider the way it drives a particular approach to socio-technical development. Two main aspects typify this approach: first, a preference for the "open source" philosophy of development. This ethos, which stems in part from the earliest hacker culture of the 1960s, has now acquired serious corporate respectability as a credible alternative to proprietary, hierarchically managed development of software and hardware systems. In place of hierarchy, many artisans contribute components within open, standards-defined frameworks, freely sharing improvements and benefiting jointly from the collective rising tide. The second aspect of this culture is a style of heterogeneous teamwork, typically assembled around temporary, socially-specific projects or campaigns. Geert Lovink, the Dutch media theorist and co-organizer of Hybrid Workspace at Dokumenta, formulates a framework for cooperative action as:
" … a radical pragmatic coalition of intellectual and artistic forces-- forces that, so far, have been working in different directions. It is time for dialogue and confrontation between media activists, electronic artists, cultural studies scholars, designers and programmers, media theorists, journalists, those who work in fashion, pop culture, visual arts, theatre and architecture."[63]
The tactical media orientation uses all modes of media, old and new, and in particular looks for ways of combining the virtual world of digital media with community-based media practices. Lovink and colleagues have been closely aligned, as technical and creative advisors, with the Soros foundation in setting up internet access centres, media art research labs, and training in the former Eastern bloc. They are now turning their attention to Asia, developing links in China, India, and Indonesia.
An apparent spinoff of these developing links between the Euro-socialist art-hacker information culture and the developing world is the recently announced Sarai -- the first independent media culture centre in India. Sarai is a joint initiative of the Centre for the Study of Developing Societies, Delhi, and the Raqs Media Collective, Delhi, in collaboration with The Society for Old & New Media (the Waag), Amsterdam. Sarai is conceived:
1. As a public access driven, de-centralized constellation of a variety of research, creative practice and education initiatives in all aspects of the new and old media landscape.
2. As an alive and integral part of the new urban culture and emerging civic consciousness of the city of Delhi/New Delhi. As a major player in the shaping of the urban culture and political imagination of the city of Delhi/New Delhi in the future.
3. As a place where young and old people, academics, scholars, activists, technicians and artists can interact amongst themselves and with others through old and new media, through a variety of programs that are designed primarily to be low-cost or no-cost. This includes terminals for free public Internet access, ISP services, offline/dial-up connectivity for those who cannot afford personal internet accounts, publication, outreach and education programs, and a variety of open public events.
4. As a hub of networking amongst new/old media activists, a centre for creating and exhibiting original work, and as a clearing house for innovative ideas in the South Asian/Asian region.
5. As an equal partner of new media initiatives at an international level, and as a contributor to the content of emerging/new media cultures across the world. [68]
Sarai is still in the earliest stages of establishment. As a model, it suggests a possible structural approach towards wider development of active media and information capabilities. The stress on local self-direction, combined with globally sophisticated cultural partnerships, bodes well for its future. Some possible pitfalls can be anticipated: too heavy a reliance, for example, on what worked well for the European partner. It is likely, for instance, that training programmers to think about creative users, or teaching artists to program, may require a completely different approach in the Indian context than has worked in Western or Eastern Europe.
Cultural Critique, Reflexivity and Innovation
In the main, humanists have had considerably less to do with the kind of co-operative development of technologies undertaken between artists, engineers and scientists. One thoughtful commentator has summed up the usual interests of humanists in information technology as follows:
- "Computation becomes the object of humanities research: the history of computation, the sociology of computer use, cultural criticism of Artificial Life
- Computational tools are used for humanistic projects. Humanists compose with word processors, send each other email, read the latest articles over the Web.
- Computational artifacts become essential research tools; automatic text analysis is used to support literary criticism, scholarly papers appear in hypertext, collaborative writing environments are used to co-write texts.
- In conjunction with the adoption of computational tools, computational concepts are borrowed and adapted to humanist projects: chaos theory as a method of literary analysis, the cyborg as a model of subjectivity, the robot historian as first-person perspective." [69]
The author of this passage, Phoebe Sengers, is a rare case of a computer scientist with equal background in cultural theory [70]. Her own original contribution is a widened conception of what she terms "cultural informatics":
"… a practice of technical development that includes a deep understanding of the relationship between computer science research and broader culture. This means understanding computing as a historical, cultural phenomenon, including, for example, analysis of metaphors that shape technical approaches, discovering prejudices in the Heideggerian sense that cause us to look at problems in one way to the exclusion of others, finding unconsciously held philosophical difficulties that leak their way into technical problems. These insights are used as a basis to change underlying metaphors, prejudices, philosophy, resulting in changes in technology. Cultural informatics integrates a broad humanist perspective with concrete interventions in technology and technical practices."
As a term in English, "informatics" is preferred by some scholars to designate the disciplines usually called "computer science" or "computer engineering". The preference is not incidental, nor is it without adherents from the computer science community, for similar reasons. Yale professor David Gelernter has called for a complete re-thinking of the training of "computer people", emphasizing not cultural theory but an in-depth knowledge of the history of art, design, and aesthetics. "Software programming should be taught in studios, like art", Gelernter writes [71]. Far less stress should be placed on correctness, and more on elegance.
What Gelernter is pleading for is a higher standard of design in digital media, a balance of form and function that goes far beyond the usual "requirements-based" conception of user-centred design. Yet elegance accounts only for the "surface design" elements of that extra measure of aptness, of conviviality beyond mere usability. Taking seriously Sengers' proposal to consider computing as a humanist discipline actually pushes at the intersections between deep system-level design, philosophy, and social science. It is hardly surprising that this agenda is, so far, little understood in the academy.
At the Banff Centre's Art and Virtual Environments project (1991-94), a deliberate plan was made to precede a period of active technology-art development with a formative symposium organized to critically examine the concept of virtuality. This was carried out in a 10-week residency, involving not only artists and technology developers, but also philosophers, cultural theorists, and art historians. Virtuality here is understood:
"... as an expression of social discourses that are already in place. One of the intentions of the residency is to address the broader context of socio-cultural shifts that are both the cause and symptom of technological changes."[72]
The goal was to develop a set of alternative conceptions -- metaphors, scenarios, speculative designs -- that could inform the development team through the actual implementation phase. In fact, few linkages were made at so functional a level. The actual experience revealed the very wide gaps separating the world-views of critical theorists from those of engineers and programmers (much less so, most of the artists). As noted by one of the participants, self-identified as a "theorist":
"While the majority of artists appear to have been theoretically and practically ill-equipped to deal with this new technology at the level of its technical organization, those involved in developing its hardware and software are equally ill-equipped to deal with its social and cultural dimensions as well as its political implications."
Yet, as was proved in the subsequent implementation phase, the artist-developer teams were eminently capable of developing, at a project level, cooperative strategies sufficient to produce what one commentator has since termed "projects that would permanently extend the tools we have for seeing and hearing"[73]. But what remained under-realized in this project was precisely the kind of conscious integration of what Sengers called a "humanist perspective" in an ongoing technical practice. The Banff technical group disbanded after the project, and the accumulated expertise and software capability dispersed among the participating artists and researchers.
Within the context of the European Union I3 research networks, several ethnographers, sociologists and anthropologists have been carrying out field studies of contemporary technological art installations, aiming thereby to inform subsequent system and design practice. In an ethnography of visitors to the ZKM Media Museum, investigators chose to analyze media art works sociologically as "breaching experiments". With a technical goal to devise protocols for interoperability between different virtual environments, they studied "the sense of presence experienced by museum visitors", to better understand their "intersubjective organization".[74] These early results do not indicate whether or how findings would lead into the design phase.
Also in the past year, interdisciplinary humanities seminars have been held on "Computing science as a human science" at the University of Chicago, and on "Virtual reality, past and present", at Cornell. These seminars are intended to engage with the technical community, but do so still within the usual framework of critique. A newly announced program sponsored by Microsoft Corporation at Carnegie-Mellon University illustrates a more active model.
This pilot fellowship program will connect three established artists and a critic-historian-curator to the robust science-technology resources at Carnegie Mellon. The artists will:
1) engage contemporary science-technology as it provides tools, media, and content to their work,
2) assume leadership roles in generating and implementing complex, collaborative projects, and
3) connect the process of the projects and its results to the larger community. (www.cmu.edu/studio/)
Applied research combined with critical perspectives has been termed "critical technical practice" -- another term, like "cultural informatics", that aims to create a new space for heterogeneous activity. [75] Still, very little of this community seems to be connected to, or even aware of, the potential resources and talents of the electronic art community. This is a point we will return to in the report's conclusion.
Broadening Public Awareness of Techno-Science
In an informal evaluation of the Wellcome Trust's Sci-Art program, Cohen noted the deep sense of urgency expressed by many of the applicants, that they felt the need to look outside the limitations built into their careers and institutions. "It may be too strong to say that they felt some kind of moral imperative…it is rather that they appeared to feel that the boundaries of their discipline were (and indeed are) weakening at the edges, that people from outside were doing work similar to their own, and that by moving outside the discipline, they may be rewarded by a new perspective and new ways of thinking about their subject"[76].
If this type of program has indeed struck a nerve, it would be worth considering how it might be made more accessible beyond the U.K. While the outcomes of such collaborations can clearly be very broad, here it is worth underlining the potential contribution to public discourse about scientific and technological issues.
Two final points close this discussion. First, as we have seen previously, artists are increasingly attracted to the horizons of bio-medical and evolutionary computation. The ethical quandaries arising from these fields may perhaps be articulated and illustrated especially well through the kinds of expressive collaborations with scientists nurtured through schemes like the Wellcome Trust's Sci-Art. Second, providing a more variegated sense of the so-called "hard" professions of science and technology might lead young people to conceive of these professions in newer, more nuanced ways than tends to be the case. To close with an anecdote: one of the most gifted female computer graphics systems programmers began her higher education at art school in Canada. After seeing the early computer animated film "Hunger", she decided to train in computer science, in order to create better tools for artists.
This report has attempted to present a multi-perspective framework from which to view the rising density of communication between the worlds of art, technology, and science. Designating the "site" of this hybrid activity as the studio-laboratory, the first section traced the development of such organizations historically, compared their dynamics to those of "transdisciplinary" knowledge production in science and technology, and argued that they foster incremental, radical and systemic innovation. By its boundary-spanning nature, a good deal of this activity stretches the limits of established paradigms, whether these be considered from the techno-economic, social or aesthetic standpoint.
The survey of current studio-labs revealed a number of commonalities with Gibbons' description of "mode 2" knowledge production. The assembly of scientist-artist-engineer teams usually takes place in a specific context of application, which can range widely from an art commission to teams of more or less equal artist-scientist researchers. In many cases, the crucial collaborative communication still takes place in face-to-face encounters, as a rule in laboratory or production rather than seminar or theoretical settings. Where distant teams work on common projects, periods of intensive "residential" development are interspersed with tasks still often divided by discipline. This makes particular sense for cyclical, iterative projects, like system design and development, where learning by using can only go on so long before major overhauls are needed. The temporary media lab notion is the most lightweight version of this contingent manner of organizing the conjuncture of artists, programmers, and theorists; it contrasts with the high-overhead, large permanent staffs of centres like the ZKM or IRCAM.
With the price-to-performance ratio of commodity hardware continuing to decline, specialized equipment is becoming less critical to the studio-lab than the range of collaborative dynamics it can accommodate. Individual artists are, more and more, acquiring effective home-based studios which even five years ago were rare outside high-end labs or commercial facilities. What we have learned through our survey, however, is that much of the innovation emerging from both the older and more recently founded structures takes place in the flesh, within particular settings, whether these be temporary special events, industrial labs, cultural centres, or universities.
How the specificities of particular studio-labs relate to the "system of innovation" in which they function is a rich subject for further study. As we have seen, a dialogue is already occurring in the E.U. between the arts/cultural sector, industry, and university researchers, and new mechanisms are being devised to turn that dialogue to action. In North America, there are no large scale public-oriented studio-labs operating with the kind of ongoing government sponsorship found in Europe, or corporate sponsorship as in Japan. But the tremendous dynamism of the U.S. information/media sectors generates lots of "studio-lab" activity which could not be addressed in this report; for instance, Intel’s support for artists working in a variety of university labs, or Disney Corporation’s now very substantial scientific research department. In the specific U.S. setting (and to a lesser degree in Canada), the difficulty seems to be less about attracting corporations to finance educational facilities with hardware/software; the more important dilemmas arise over the strings attached to such sponsorship. For this reason, the key question in the North American context will turn on how independent media labs can be sustained, whether on campuses, through ‘enlightened’ corporate programs like Xerox, or, what has been less attempted on this continent, building onto existing cultural infrastructures like museums or theatres. Clearly, this particular discussion will need to be framed broadly enough to bring industry, artist/designers, technology researchers and social/cultural theorists around the same table.
In our look at the studio-lab phenomenon, we have stressed that place still matters, perhaps even more now that communication is so deceptively ubiquitous. We have also made clear that the range of innovations coming from these sites falls into all four of the classes described by Freeman. What is less clear, from a policy standpoint, is whether all should be equally supported, or greater efforts be concentrated towards a few. This question will, naturally, be answered differently in the developing world, where the incremental integration of digital with older, locally-specific forms of media may be the soundest way to start building up a broadly based innovative capacity.
Also, from a policy perspective, it is important to think of the cultural shape of future digital media in terms of the accumulation of expressive traditions: ancient and modern, individual and collective, purely informational and materially embodied. Support for "projects", valuable as they will invariably be, should nonetheless be understood in these larger terms. From this assumption, though, arise yet further questions: which models of studio-lab fit best into which national innovation contexts?
The third chapter examined this framework through the prism of five discussion themes. Using the figure of Instruments of the imagination, the cybernetic art work was likened to previous representational dispositifs -- mediating devices or boundary objects between the sensorium and a "natural" world ever more saturated by artifice. Creative users extends the much-studied user-producer relationship to consider the artist as a kind of user-to-come, a necessary extension where the field of innovation is a fast-evolving symbolic environment. Seeing the artist as a cognitive pioneer only, we suggest, weighs too heavily on the side of theory; learning through using is how artists have always fashioned their poised balance between form and content, technique and idea.
Access, it was suggested, has become a leaky portmanteau term -- carrying all freight but delivering little. Besides measures based on hardware, price, and intellectual coherence, access entails a new kind of fluency with the medium-specific traits of the computer; the build-up of such fluency may be less an individual trait, and more a function of networks (programmer, designer, artist, user). Reflexivity thematizes technical practice as socially situated. The distance between the worldviews of cultural and social theory, and those of the designer-engineer-artist, remains large, but there are promising indications that exchanges of insight between them are growing. Finally, public awareness of techno-science may be enriched through more extensive art-science collaborations. Benefits include improved conceptual articulations and the re-shaping of the image of professional practices.
Necessarily, a report of this nature leads more to openings than to prescriptions. More knowledge is needed about a host of issues and questions, a partial list of which includes:
- The structural viability and likely longevity of the new large-scale stand-alone centres for art and technology.
- The potential value of tactical and "temporary media lab" interventions in the developing world: in particular, what infrastructure and resources would be needed to encourage greater linkage between studio-laboratories in the developed and developing worlds.
- Widening awareness in the corporate world of the potential value of an engaged style of cultural support, modeled more on innovation than traditional notions of patronage.
- Whether networks of innovators, here characterized separately in terms of research, civil society, and art-production, can become more integrally connected.
- How best to advance a common pragmatic agenda for "cultural informatics", joining the concerns of social and cultural theory with the fields of computer engineering and software design.
Art historian Erwin Panofsky, writing about the Renaissance, attributed the flowering of the arts and the birth of observation-based science to new "transmission belts" that re-connected theory and practice, art and science, instrumentation and sense-perception.[77] At least as much may be at stake, five hundred years later, as we face the challenge of continually re-humanizing our technological world.
3. The Mass Media of Communication
The importance of the mass media lies in the fact that for the first time in human history the means exist for speedy total communication. The mass media provide the channels for full publicity. They constitute the basis for the rapid creation of a public, or of publics. This is the literal meaning of "publication." The mass media of communication must be of crucial significance for democracy. These techniques make it possible to define "the people" with new clarity, for they constitute an effective source of common experience. They greatly multiply the interconnections between individuals and among groups and correspondingly increase the need for conduct that takes account of other people. In nondemocratic forms of social organization the privileged ruling classes are protected by a curtain of privacy which shields their actions from general view. To be sure, mass communications may be used by tyrannical individuals or groups to increase deceptions and to compound injustices. But in the long run it would appear that these new media work in the direction of some sort of democracy, by making information available to everyone. This may not be ideal democracy, but it will be a form of social organization in which all the people must be reckoned with. Since under modern conditions the actions of all people in key positions of power and influence are thus likely to be known almost immediately by nearly everyone, these people cannot make decisions without reference to the reaction of the public. Hence the mass media produce a society in which all the people are at least tacitly consulted in the making of decisions of public consequence. In this sense they constitute an important democratizing force in the modern world.
It is not enough, however, that the mass media contribute to democracy. The crucial question is, what kind of democracy do they serve and promote? There is no doubt but that radio, television, newspapers, and all the other potent modern means of public-making create forms of association in which each person counts, in a way hitherto unknown. But to what end does he count? What is the animating spirit of those great new publics generated by the magic of the mechanical and electronic arts?
Three answers may be given to these questions. The first answer is that the mass media are tools for advancing the interests of those who control them. This concept of the purpose of mass communication is probably the one most widely held today -- though usually tacitly rather than explicitly. The channels of publicity, according to this view, are means of exerting influence, of getting people to believe and to act in ways the publicist desires. They are techniques for amplifying the power and range of the user’s words, so that he may (quite literally) have a greater voice in the conduct of human affairs. They are impersonal agencies for manipulating other people.
This first conception of the function of publicity is democratic only in the limited and perverse sense that the mass media create and influence whole publics, and that presumably every person is entitled to advance his interests in this way. But the publics thus created are not communities of free persons; they are masses of more or less identical psycho-physical objects pushed this way and that by the powerful purveyors of propaganda.
A second conception of the purpose of the mass media is, apparently at least, more benignly democratic than the first. This is the view that the function of the agencies of mass communication is to create and sustain a "popular culture." Now the goal is to serve the public’s interests, by supplying the people with what they want; it is not manipulation and control of the public by special interests. From this standpoint the people are consumers to be satisfied, rather than objects to be managed. There is always an author, an editor, or a performer who can represent every person and every kind of life, thus creating a great company of others who remind one that he is not alone and who give him assurance that what he does and approves is right. In this manner the powerful techniques of public-making have provided a major answer to the democratic demand for self-determination. While it is still not practicable for each person to do exactly what he wants, the mass media do contribute immeasurably to that self-justification which is the mainspring of the autonomous spirit. In this mass society every person -- a few misfits excepted -- can at last find public warrant for being or becoming whatever his heart desires.
Actually the craving for collective support for oneself is a sign of misgivings about one’s worth. Multiplication of this same self through mass identification does not produce personal strength, but only magnifies weakness. The pandering function of the mass media merely weakens human personality by fostering self-deception. The truth is that man is not and never can be really autonomous. He is not and never can be free to order existence to his heart’s desire. When he tries to do so, he is both resisted by the outward barriers to his asserted sovereignty and beset within by the sense of meaninglessness which comes from having no correspondence with the health-giving laws of life.
There is a close connection between the use of the mass media to advance special interests and their use to give the people what they want. When people live by the principle of want-satisfaction, they will employ any available means for acquiring the wanted objects. They will give honor, prestige, and power and will gladly subject themselves to those who will supply their cravings. In a society pervaded by the goals of consumption, those who seek power for themselves can also, by skillful psychological manipulation, create new wants, which they then proceed to satisfy, at a profit to themselves. A people whose highest goal is the freedom of personal gratification is thus most likely to be enslaved to those who produce and distribute the so-called "good things of life."
The third answer to the question about the purpose of the mass media and the nature of the publics created by them comes from affirming the democracy of worth instead of the democracy of desire. In this case the basic premise is that the organs of publicity exist to advance neither special interests nor public satisfaction, but solely the cause of excellence. Both those who publish and those who see or hear are committed to act and to judge in devotion to what is right and true. The process of communication is not simply a bipolar one between the publisher and his public, but is a triadic one involving also the controlling reality of truth, which transcends the participants and transforms the relationship between them.
The character of the mass media of communication and the purposes by which they are directed are, of course, of profound educational significance, chiefly because today they are among the most, if not the most, powerful and pervasive of all educational influences. Young people -- and older people, too -- are caught in an almost continual and inescapable barrage of sights and sounds from the various organs of publicity. Until recent years the average person had to seek out sources of information and entertainment. Now he has to seek refuge from their omnipresent importunity. Whether he wills it or not, every person is, as it were, bathed in a flood of symbols pouring in from the mass media -- music, news, sports, weather and market information. Onto the time-honored stimuli of the natural and social environments have been superimposed the more insistent stimuli of this new symbolic environment.
To a considerable extent the broadcasters and publishers are the leading educators of our day. It is they, perhaps more than schoolteachers and parents, who set the intellectual and moral tone of the society and suggest the values that shall govern the conduct of life. Perhaps the mass media are the real public schools -- the institutions in which the public is not only taught but brought into being as a public.
The public channels of communication are educationally important also because they provide a wealth of teaching materials and models for parents and teachers. The teacher is no longer one whose main function is to impart information, which is so abundantly available and attractively arranged in a variety of published forms. The function of teaching has become one of selection, evaluation, interpretation, application, and individual guidance. To put it another way, the mass media have shifted the emphasis in education from teaching to learning, because they offer at least the possibility of such rich resources of well-organized, authoritative, and cogent materials for learning that students need only the time and the incentive to learn. Again, this is to say that the most influential and important teachers, to some extent today and even more so tomorrow, are those who speak and write for the mass media. Nonetheless a continuing and increasingly important task of ordinary teachers and parents will be to develop in young people the trained perception and critical judgment that will enable them to use published materials profitably and responsibly.
One further important link between education and the mass media is the fact that authors, broadcasters, advertisers, and others who speak through the public channels are nurtured in homes and schools. Thus, the traditional institutions of education may help to determine the character and the purposes of what is done via the newer agencies. In a healthy society the influences of homes and schools should complement and sustain those of the mass media, and vice versa, replacing the chaotic and frequently antagonistic relationships that now so largely prevail.
We turn now to a consideration of some of the principles that need to be observed if the mass media of communication are to contribute to a democracy of worth. As a preliminary to this analysis, however, it will be necessary to discuss the major antidemocratic consequences resulting from the use of these techniques. Along with the democratization of sorts inherent in the creation of comprehensive publics, there are also contrary, potentially undemocratic tendencies. These follow from the high cost of production in the mass media. Considerable equipment is required to print and market newspapers, books, and magazines, to make radio or television broadcasts, and to produce motion pictures. While per capita costs of mass-produced items are low, because of the large numbers of people involved, the cost per issue or per program is normally high. As a result, considerable concentrations of wealth and power are required by the mass media of communication. It is not possible for anyone who wishes to do so to create a public. The privilege of publication is limited to those who command the requisite resources of money and position.
These simple economic and political facts underlie the antidemocratic potentiality of the mass media. The ability of a relatively few already powerful people or organizations to exert still further pervasive influence introduces the possibility of tyranny and misuse of power in some respects even more devastating than that accomplished by physical compulsion. To hold the mind and imagination of a public in subjection is more injurious to their dignity as free persons than bodily restrictions would be.
These undemocratic tendencies and dangers can be counteracted. The mass media are not necessarily contrary to democracy. They can and should contribute to human freedom and justice in a democracy of worth. What is required is the public regulation of the mass media, by reference to standards of worth, in such a manner as to prevent their arbitrary employment for the advantage of private interests, either through deliberate manipulation or through giving the public what it thinks it wants.
The use of the mass media in a democracy of worth is based on four principles. The first principle is freedom of speech. If truth is to be known and right is to be done, there must be opportunity for exploration and for search, hence for diversity of beliefs and for the public expression of this diversity. The basic assumption of the free and open society is that no one can speak about the true and the right with final and full authority. There must be no official public view to which all are obliged to hold and from which no variance is to be permitted. It follows that the mass media should be organized so as to permit and encourage the creation of many publics. A single system of production and distribution, resulting in the making of a single public, would destroy the contrast and the variation that are the source of cultural enrichment and social progress. In other words, democracy should be pluralistic. A monolithic society, consisting of only one public, is a threat to truth and justice. Freedom of publication is a prerequisite for this necessary pluralism.
Freedom of speech is not, however, without its conditions and limitations; it is not absolute and unconditional. It is founded on the presumption of good faith in those who publish. It is one thing to defend plurality on the ground that no one can claim complete knowledge of the good and the true. It is something else to uphold it from the point of view of the demand for individual autonomy. To stand for freedom in the name of a truth that is beyond mortal reach is different from defending it for the sake of personal license. In this contrast lies the clue to what is meant by "good faith." Good faith is faith in the good. It is action predicated on loyalty to the good.
Thus, to the first principle must be added a second -- namely, the principle of regulation. It is in tension with the principle of freedom of speech, not as total contradiction, but as partial limitation. It sets bounds to freedom. While plurality of published influences is desirable in order to allow for criticism and improvement, not any and every influence may be permitted. Any society needs some minimal standards which prescribe in broad terms the range of permissible public communications. Such definite judgments are necessary because even people who are committed to the good are never completely devoted to it. A society organized on the basis of dedication to excellence as an unargued presupposition is made up of people none of whom actually fulfills that ideal. There are also people in such societies who do not even nominally profess or assume any such allegiance to values and who pursue their autonomy, relying on the good faith of those who are dedicated to the right.
Whether the regulation shall be narrow or broad depends chiefly upon the degree to which the members of the society are actively and consciously devoted to the good. When such devotion is nominally assumed but is actually weak, it is necessary to set up stringent legalistic regulations which define within a narrow range the allowable forms of published and broadcast materials. When loyalty to the good is actually widespread and strong, social controls on what is communicated may be correspondingly relaxed.
Regulation of the mass media is not practiced solely in a democracy of worth. Such controls are also the main reliance of non-democratic social orders -- the means by which the techniques of public-making are reserved for the special purposes of those who hold the reins of power. Regulation is likewise essential in the democracy of desire, as the basis for insuring the social peace and cooperation necessary to satisfy the maximum number of interests. Though undemocratic societies, democracies of desire, and democracies of worth all must regulate the mass media, the nature and source of the regulations are different. In the first two, the controls are based on considerations of efficiency and expediency: in the one case for maintaining inequality of power; in the other, for distributing and equalizing power. In the third society the controls are based not on preservation or accommodation of interests, but entirely on value considerations -- on right, justice, and qualitative excellence.
Every society censors communications that would immediately endanger the security and safety of the public. For example, use of the mass media to incite rebellion against the established government or the publication of military or diplomatic secrets is obviously inadmissible in any kind of society, on the grounds of corporate self-preservation. Other matters that would be repugnant to most people, such as gross misrepresentation of facts important to health and safety, or public displays of vicious and immoral conduct, would also normally be prohibited by law.
The question then arises of who should do the regulating. Ultimately the responsibility lies with the agencies of government. The courts may adjudicate complaints brought against publishers or broadcasters, legislatures may prescribe the limits within which freedom of speech is allowed, and other government agencies may exercise regulatory powers through the granting and withholding of licenses.
The extent to which such government control is required depends upon the degree to which regulation is privately and voluntarily effected. In a society where the people are widely committed to the right rather than to the advancement of their own interests and the satisfaction of their own desires, the censoring functions of government can be reserved for the occasional serious offender who escapes other controls. When voluntary private regulation is weak, strong government restrictions are required.
The producers of the mass media may regulate themselves through their own associations, both on an advisory basis and by invoking sanctions on those who stray beyond the bounds agreed upon. Individual producers may also regulate themselves, in the light of standards of excellence to which they have pledged their loyalty. Such self-control does not really belong under the principle of regulation at all, for it is simply the responsible exercise of freedom. This indicates that ideally freedom and regulation are not in any way opposed to each other. In fact, to be truly free is to regulate one’s conduct in accordance with the good. The principle of regulation is contrary to the principle of freedom only when freedom is taken in the sense of autonomy.
The distributors of the materials of mass communication are another important means of control. For example, subscription agents and booksellers can to some extent choose what they will and will not sell and to whom. Motion picture theaters can sometimes determine the films to be shown, and they can in certain cases restrict the viewing of films to appropriate persons (for example, to adults). Since television and radio programs, on the other hand, are open to everyone without limitation, it is necessary to maintain a more broadly applicable standard of public propriety than applies to the other forms of mass communication.
Finally, voluntary regulation of the mass media may be exercised by the receiving public as individuals and as groups. Voluntary associations such as churches and clubs may adopt their own standards of quality and may employ their own corporate disciplines to enforce the observance of these standards. Of special importance in this respect are families, in which books, newspapers, magazines, movies, and broadcasts may be chosen with reference to standards of worth considerably different from and higher than those generally prevailing. Family standards are continually subject to erosion from the inflow of debased materials from the mass media -- as in the brutality and immorality of many of the so-called "comic" books, the triviality, sensationalism, and distortion of most journalism, and the preoccupation with crime and violence in many television programs. It is the obligation of parents to maintain at least minimal standards in the home by appropriate regulation of the reading, listening, and viewing diets of their children.
Similarly, libraries and museums may regulate the quality of the materials acquired and the manner of their use by the public. Schools, too, play an important part in the selection of published materials, both in choosing what is used in regular instruction and in influencing students’ habits of seeing and listening.
The ultimate goal of control of the mass media is to educate the public in self-regulation -- to develop in all the people, whether producers or recipients, a reliable sense of what is worthy and what is not worthy of being made public. In this manner the principle of regulation supports and confirms the principle of freedom.
The third principle for the mass media in a democracy of worth is that of social support for excellence. In a democracy of desire or in an undemocratic system, where mass communications are used to advance the interests of individuals, of groups, or of the people as a whole, excellence is at best a by-product. There is no necessary relation between true worth and the satisfaction of wants. It is not often likely to be to the advantage of a newspaper publisher, for example, to print the whole truth or to present the most searching analyses of the news. Nor, apparently, can movie, radio, and television producers normally afford to offer a steady flow of high-level programs. Under these conditions, while materials of great worth may be produced, their appearance is fortuitous and sporadic and their tenure precarious. Furthermore, the very excellence of the offerings is compromised and tainted by their subordination to the interests they are used to serve.
The predominance of commercial support for the mass media in the United States is evidence that in this field a democracy of desire prevails. Newspapers, magazines, radio, and television are largely supported by the sale of advertising. Hence the nature of what is communicated is mainly determined by what will sell products. Each commercial sponsor tries to present whatever will please the most people among those who are in the market for his product. If he can make use of snob appeal to sell automobiles, washing machines, or beer, he may sponsor a symphony orchestra, but if a sordid murder drama would commend itself to a larger public, he will present that instead. Under this system there is no commitment to excellence as such, but only as it may happen to be a useful tool of product promotion.
By contrast, in a democracy of worth, excellence is the direct and primary aim, to which other considerations are subordinate. In such a society the means of mass communication are given direct social support for the publication and broadcasting of excellent materials. The money and the manpower are provided specifically to accomplish these beneficent purposes, which are of such great importance for public well-being. Thus, dependence upon organizations whose primary purposes are entirely other than public communication is avoided.
The principle of social support for excellence may be carried out by a number of different means. The most obvious way is for the government to operate its own general press, radio, and television services for the public good. This has not been done in the United States by the federal government, presumably because such activities are not included among the powers specifically assigned to it by the Constitution. However, some of the state and municipal governments have entered the broadcasting field in a small way. In many other countries government-controlled mass media are the rule. In some cases substantial contributions to public well-being have been made. In other cases the dangers to liberty in government-operated press and radio have become evident. This peril is most ominous when the government has a monopoly of the mass media. As pointed out earlier, democratic freedom depends on a plurality of public-making agencies. Even though the official media of communication may in principle be devoted to the good, those who operate them are never wise enough or good enough to be the exclusive architects of the public mind. Thus, while a case for government-supported mass media may be made, other independent agencies should coexist in the field with them, and safeguards should be provided in the system of government to insure that high professional standards prevail and that the media are used for the general good rather than for partisan advantage.
A second means of providing social support for excellence in the mass media is through private, nonprofit organizations devoted directly and exclusively to the mass production and distribution of high-quality communicated materials. Examples of such agencies are the noncommercial educational radio and television stations and certain nonprofit publishers and film makers.
Third, there are commercial mass media devoted to excellence and supported by direct consumer purchase of the materials produced rather than indirectly by the sale of unrelated advertised products. The highest-grade newspapers, magazines, and books are supported by a reading public dedicated to excellence. The success of such publishing enterprises depends upon a widely diffused will to truth and a widespread interest in significant cultural attainments. The plan of subscription television rests on the application of the same rationale to this newer medium of communication. Whatever its shortcomings in other respects, the idea has the great merit of establishing a direct relation between product and purchaser, thus creating the means of responding to a substantial demand for programs of consistently high quality.
Finally, the social support for excellence can be accomplished through the various institutions of formal education. As pointed out earlier, the mass media are now in fact, if not in name, the most powerful of all the agencies of education. Their function is to disseminate widely the resources of culture by means of words and images. This is also the primary function of schools. By long tradition the schools are deliberately responsive to the claims of truth and of other ideals of excellence. The mass media, as now organized, have no such generally acknowledged objective. This is understandable in a society organized on the basis of expediency, but not in one dedicated to the realization of values. In a democracy of worth mass media ought to be frankly regarded as agencies of education and should be made an integral part of the work of the institutions of education. The making public of information and even of entertainment is a natural and proper extension of the function of regular educational agencies.
A growing recognition of the educative role of the mass media may result in profound changes in both the schools and the agencies of mass communication. There is no reason, for example, why outstanding teachers cannot make valuable materials for learning at every level available by press, radio, and television to the public at large. With such public accessibility of the materials of instruction, the emphasis of teachers in school classrooms may shift considerably. Guidance, testing, and individual application can largely take the place of presentation of learning materials by the teacher. The primary function of most teachers should be to stimulate and channel the students’ dedication to make use of the abundant resources available through modern techniques of symbolic reproduction and distribution.
The great universities should become centers of public education in a new sense. They should not merely cherish their own intellectual life, serving only those who come to them for instruction. They should become major centers of mass communication, carrying on a continuous work of adult education of the public in the letters, sciences, and arts, by printed publications, by motion pictures, and by radio and television broadcasts. For this work they should receive the substantial material support that would be required to do the job at a high level of competence. In this way the institutions of education could admirably exemplify the principle of social support for excellence.
The fourth principle of democracy in mass communication is that of criticism, or of evaluative response by the receiving public. Only by criticism can the one-directional nature of mass communication be overcome.
Criticism may be accomplished in several ways. The first way is direct communication of the individual with the author, publisher, or producer. A relatively small number of thoughtful letters or conversations may have a significant influence on the quality of what is published. A second mode of criticism is the regular publication of reviews by expert critics. Evaluations by such reviewers have considerable effect upon the professional standing of authors and producers and in the formation of public opinion. They are particularly essential in a society devoted to values, to keep before the public a clear vision of ideal ends to be served and to show explicitly in what respects materials offered for public reception do or do not measure up to these standards. Third, criticism can be accomplished implicitly by the publication of material that acts as a countervailing influence. Fourth, indirect and inarticulate, but nevertheless effective, criticism may be effected by giving or withholding support for the agencies of mass communication. With commercial mass media supported by advertising, the individual consumer may respond by purchasing or not purchasing the advertiser's products. This is obviously a cumbersome and uncertain mode of expressing evaluations. With government-controlled media, criticism must take place through the regular political channels. In the case of mass media supported by private philanthropy or by the sale of materials to the public, criticism is exercised directly and powerfully on an economic basis.
It is with respect to the critical function that the pertinence of the institutions of education to the mass media of communication is perhaps most evident. Criticism is integral to the educative process. It is an essential feature of good practice in schools, colleges, and universities. When mass communication occurs under the auspices of nonschool agencies, criticism is an extrinsic function -- an activity carried on by interested outsiders who wish to have a part in determining the nature of what is made public. When the mass media are an arm of the schools, the critical function is intrinsic, since self-appraising, reflective activity is an essential feature of education. It is this self-evaluative function that makes the institutions of education uniquely appropriate as centers for mass communication in a democracy of worth.
4. The term 'interprofessional' refers to "occasions when two or more professions learn ..." It is associated with patient safety and communication among health care providers, with the capacity to "work together in the most effective and efficient way," and with "mutual understanding of and respect for the contributions of various disciplines."
Teaching dialogue is not merely a teaching method, a form of the teacher-student relationship, or a cognitive style of teaching; it is, above all, a spirit of teaching. Its typical features are intellectual sharing, the promotion of understanding, and the creation of meaning. It appears in three forms: dialogue between the teaching subjects and the curriculum text, dialogue among the teaching subjects themselves, and the teaching subject's dialogue with the self.
In recent years, especially since the official launch of the new curriculum reform, ideas of dialogue drawn from philosophy and literary theory have entered education, initiating research on dialogue in teaching and advocacy of dialogic teaching. One direct effect is that many teaching theorists and front-line teachers are no longer strangers to terms such as "dialogue," "teaching dialogue," and "dialogic teaching." Yet as people began to use these terms frequently and to advocate teaching dialogue strongly, I also found that many understand teaching dialogue only in its literal sense, which narrows it and invites misunderstanding. What, then, is teaching dialogue? What are its essential nature and typical characteristics? What are its manifestations? Clarifying these questions in theory will help teachers establish a correct concept of teaching dialogue and pursue its ideal forms in practice.
I. The Meaning of Teaching Dialogue
In linguistics, dialogue stands opposed to "monologue" and to talking to oneself: it is oral conversation between two or more persons. In philosophy and literary theory, dialogue includes not only people's direct discourse but also an indirect collision of thought between people. Such dialogue occurs chiefly between human spiritual products: it begins when the text produced by one person is read, understood, and criticized by another. Thus a modern person can enter into dialogue not only with another modern person he has never met, but even with the ancients he could never have known. Today the idea of dialogue has penetrated every sphere of society: from daily life to non-daily life, from the life-world to the scientific world, from the academic field to the political field, from domestic issues to international affairs, nothing is without dialogue.
As the idea of dialogue has permeated all areas of society, it has also penetrated the field of education and, through its "marriage" with teaching activity, the main channel of education, has become an important element of teaching philosophy and of the spirit of teaching. The idea of dialogue in teaching is becoming increasingly prominent.
What, then, is teaching dialogue? I have found that people tend to understand "teaching dialogue" at different levels.
1. Teaching dialogue as a teaching method. This understanding emphasizes the co-presence of teacher and students and associates dialogue with students' active participation and involvement. For example, some teachers feel that teaching dialogue simply means organizing student discussion in class, taking discussion-based pedagogy as the typical manifestation of dialogue.
2. Teaching dialogue as a teaching relationship. This interpretation is designed to break through the traditional subject-object dualism of object-oriented thinking. It takes teaching dialogue to mean intersubjective relations of democracy, equality, mutual trust, and mutual understanding in teaching, and it implies the dissolution of teacher authority and teacher-centeredness. It humanizes the teacher-student relationship and foregrounds that relationship in our thinking about teaching.
3. Teaching dialogue as a cognitive style of teaching. This interpretation holds that knowledge is not something taught, copied, and repeatedly reproduced, but something continually constructed and created in dialogue; the teaching process is a process of constructing and creating knowledge.
It should be said that each of these understandings contains some truth, but each is also incomplete and insufficiently profound. In my view, teaching dialogue is a spirit of teaching that runs through every aspect and element of teaching. Dialogue, then, is not only a means and method by which students come to understand the world, and not just a characterization of the ideal teacher-student relationship; it is the mode of existence of teaching, the soul of teaching activity. In dialogue, teachers and students continually move beyond their original narrow horizons and attain a rich, many-sided, and creative understanding of knowledge, of the meaning of the curriculum, of emotion, and of life, so that both sides, as whole persons, continually achieve development and liberation, and in particular the formation of a complete spiritual world. Human liberation and freedom are the fundamental purport and ultimate aim of teaching dialogue. It follows that teaching dialogue is entirely different from ordinary conversation and question-and-answer between teaching subjects. Verbal conversation and question-and-answer are important and indispensable to teaching dialogue, but they may possess only the form of dialogue while failing to reach communication and exchange of spiritual and intellectual depth. Not all conversational and question-and-answer acts in teaching, therefore, are genuine dialogue. Teaching dialogue means the opening of both parties' inner worlds; it means that the parties listen to each other in good faith and accept each other; it means that, in the process of accepting and pouring out, the two spirits achieve "encounter," fusion, and generation.
On the basis of this understanding, I hold that teaching dialogue, as a spirit of teaching, is the process in which the teaching subjects, and the teaching subjects and the curriculum text, on positions of mutual respect, self-respect, trust, and equality, use language (telling, listening, debating, questioning, and so on) and non-verbal signs as vehicles to achieve intellectual sharing, mutual understanding, and the creation of meaning.
II. Typical Characteristics of Teaching Dialogue
More specifically, teaching dialogue as a spirit of teaching has the following three typical characteristics:
1. Intellectual sharing
Some scholars hold that sharing between teachers and students "is first a sharing of culture: the teacher, in his identity as educator, brings knowledge, ideas, wisdom, experience, and other cultural achievements to teach to the students, while at the same time, through dialogue and communication, both sides gain new knowledge and are raised by the effort. It is also a sharing of responsibility: teaching duties are shared, and teachers and students are jointly responsible for the success or failure of education. It is, even more, a sharing of spirit: teachers and students convey to each other, understand, and feel the same spiritual experience. Through these kinds of sharing, teachers and students can truly become people in the same boat (sink or swim together)." In my view, teaching dialogue is first and foremost intellectual sharing among the teaching subjects. In dialogue, each subject has a different personality, different thoughts and feelings, and a different life experience; each starts from his own views of life, the world, and existence, from his own unique feelings and experience, participates in the dialogue in his own unique way, and freely expresses his understanding of the curriculum text. Through such exchange and collision, the teaching subjects attract one another and acquire the knowledge, experience, wisdom, ideas, and spirit of personality possessed by the others, thereby achieving intersubjective sharing of knowledge and experience and realizing the growth of each teaching subject, the expansion and elevation of thought and wisdom, and the enrichment of the spirit of personality. Teaching dialogue thus enables every teaching subject to overcome and dissolve the dogmatism and narrow-mindedness of a self-centered individual intellect, while attaining a unity of the uniqueness, integrity, and richness of individual intellect.
2. Promoting understanding
Teaching dialogue is not a process in which one party overcomes the other or imposes its views and opinions on the other; it is a process in which the teaching subjects continually reach mutual understanding and self-understanding.
Mutual understanding among teaching subjects means that both parties to the dialogue continually enter each other's world of ideas, "encounter" and communicate with each other's spirit, and reach consensus on a curriculum topic. In fact, this is only mutual understanding in the cognitive sense, understanding as a means. In my view, mutual understanding among the subjects of teaching dialogue means the following. Cognitively, the parties reach a consensual understanding of the curriculum text; where consensus cannot be reached, each can still stand in the other's position to look at the problem, and can maintain an attitude of tolerance and support toward different perspectives and opinions by way of "care" or "empathy." With respect to power, both parties respect each other's legitimate needs, motives, rights, and freedoms, and every participant in the dialogue has the right to express his views on the curriculum fully, whether he is a student or a teacher, a student cadre or an ordinary student, a student with outstanding performance or one whose academic performance is poor. In genuine teaching dialogue, "you" and "I" have equal rights; each is a unity of subjectivity and intersubjectivity. Ethically, with respect to personality, both parties maintain their own individuality and independence while grasping and respecting the individuality and uniqueness of the other; while truly speaking and performing as "I," each is able to accept and affirm the other as a whole "you"; each is able to understand that any cognition or behavior of the other subject has its causes and reasons (whether or not it is reasonable) and, through dialogue, to lead the subject to dissolve and reconstruct irrational cognition and behavior on his own initiative, in self-understanding and self-awareness.
Teaching dialogue not only makes mutual understanding among the teaching subjects possible; it also enables each teaching subject to achieve self-understanding. Understanding is by no means the calm, spectator-like contemplation of objects outside the individual: the world, life, existence, and other people. Understanding, in essence, is always a kind of self-understanding; it is the individual's self-awareness, a reading of the self in the mirror of the other. In dialogue, each interlocutor listens to others and tells to others; he must attend to other people's insights and perspectives, and also examine and contrast them with his own ideas and experience. In comparison with others, the individual continually becomes aware of his own rationality and his own shortcomings, actively absorbs the rationality of others, and continually corrects himself so as to achieve self-development. This is precisely the process by which the teaching subjects continually achieve self-understanding in dialogue. The depth and degree of the individual's self-understanding in teaching dialogue depend largely on the subject's consciousness of self-reflection and his analytic power. Whether it is others' (well-intentioned) criticism of oneself in the dialogue or one's own conscious self-reflection, both can promote the subject's better self-understanding through dialogue and, on this basis, his movement toward self-transcendence.
3. Creation of meaning
Teaching dialogue "is not copying or mechanical in nature, but productive and creative in nature." It is the process in which the teaching subjects work together to construct new meanings of the curriculum and of life.
(1) Teaching dialogue is generative. Teaching dialogue is open, free, and uncertain. Dialogue among teaching subjects is a generative activity: the teacher cannot plan in advance all the possible goals and directions of the dialogue, nor foresee what its outcomes will be. This is not difficult to understand. Although the teacher designs the content, theme, and basic process of the dialogue for each lesson, in actual teaching dialogue he cannot "stop" living students from asking questions at any moment, nor can he accurately predict what opinions students will present; he is therefore unable always to "control" the dialogue within a scheduled process so that it produces the desired results. On the contrary, if a teacher always attempts to control the theme, the process, and even the conclusions of the dialogue, always hoping that teaching will unfold rigidly according to his design, there can be no real intersubjective dialogue in teaching. Teaching dialogue is not only generative; its generation is endless.
(2) Teaching dialogue is a process of creating meaning. In teaching dialogue, the interlocutors do not seek merely to find and restore the original intent of the text, nor do they simply follow the authoritative speech of some teaching subject; rather, each brings his own horizon to confront and mutually verify the historical horizon of the curriculum text and the practical horizons of the other subjects. In teaching dialogue, every listener has the right to speak; no meaning is finally fixed in advance, and each participant has the responsibility to examine the thoughts, beliefs, or prejudices behind his own views. It is through mutual verification, mutual examination, and self-reflection among the teaching subjects that both sides expand their original horizons, achieving a fusion of the realistic horizon with the historical horizon, of the horizon of "you" with the horizon of "I," and thus forming a new horizon. The new horizon is a higher truth, because it "lets the alien horizon enter one's own, neither being uncritically damaged by the alien horizon nor uncritically renewing one's own, but explaining the differences within one's own horizon and thereby giving it a new validity." The process of teaching dialogue is thus a process in which individuals continually construct and create new meanings and new understandings of the world of knowledge, the curriculum text, and more.
III. Manifestations of Teaching Dialogue
1. Dialogue between the teaching subjects and the curriculum text
As mentioned earlier, dialogue occurs not only between people but also between people and human spiritual products; dialogue between the teaching subjects and the curriculum text is typical of the latter. In Hall's view, "the text is a kind of language": when people understand a text, it "speaks to the self like a 'you'; it is not an objective object but the other person in a dialogue." Thus when teachers and students read and understand the curriculum text, they have established a dialogical relationship with it.
For students, establishing a dialogical relationship with the curriculum text means overturning the traditional concept of knowledge: dissolving the supposed objectivity and authority of knowledge. Knowledge is no longer purely objective, universal, and value-neutral; its value lies not in handing over ready-made things but in providing a starting point from which people continue to create. Karl Popper held that all knowledge is, in essence, "conjectural" knowledge, a set of temporary answers to the questions we raise, which must be continually revised and refuted after being advanced. There is therefore no knowledge that can be acquired once and for all; the so-called "ultimate explanation" of scientific knowledge simply does not exist. Accordingly, when students learn and understand the curriculum text (the carrier of knowledge), they no longer need blindly to reproduce, repeat, and memorize the original intent of its creators; rather, they engage in vigorous dialogue with another consciousness. In this sense, the students' understanding of the curriculum text is always dialogical and creative. "Understanding is not repetition, not copying the other person; to understand is to build one's own ideas, one's own content"; "understanding is dynamic and creative in nature." Since students differ from the creators of the curriculum text in the background of their times, in life, thought, and understanding, in language, and even in aesthetic taste, students in dialogue creatively go beyond both the limitations of history and the limitations of their own small selves. It is in the dialogue between the students' reality and the text's history, between modernity and tradition, and in going beyond them, that students achieve self-development.
The teacher's understanding of the curriculum text, besides having the characteristics described above, is also a process of dialogue between the teacher and the designers of the curriculum materials. For a long time, teachers have taught like construction artisans faithfully building from design drawings: taking the textbook as the center, taking the teaching reference book as the "Bible" and the only material worth consulting, and always carrying out teaching according to the thinking of the designers of the curriculum materials. Here the teacher's own thought is suppressed and restricted by that of the designers. Advocating dialogue between teachers and the curriculum text means, on the one hand, expecting teachers to form their own unique understanding of the text in dialogue with it; on the other hand, it means expecting teachers to challenge the curriculum materials boldly, to understand the text critically against the designers' intentions, and to implement their teaching according to the characteristics and situation of their students, so that they truly become subjects of the curriculum and constructors of its meaning. Curriculum materials are like a musical score: each teacher, on the basis of his own understanding of the "score" and his own skill in "playing" it, gives the "music" a second life.
2. Dialogue among the teaching subjects
This includes dialogue between teachers and students and dialogue among students. Dialogue between teachers and students means mutual education between them. It calls for "the educator of the educators to be educated, to 'die' as mere educator so as to be 'reborn' as one of the educated; at the same time, he must recommend to the educated that they too should 'die' as merely the educated so as to be 'born again' as educators of their educators." In other words, in dialogue, teachers and students are no longer the understanding and the understood, the indoctrinating and the spoon-fed, the conquering and the conquered; they are each other's educators, subjects in a relation in which neither side is replaceable. "Through dialogue, the teacher-of-the-students and the students-of-the-teacher cease to exist, and a new term emerges: teacher-student with students-teachers. The teacher is no longer merely the one who teaches, but one who is himself taught in dialogue with the students, who in turn, while being taught, also teach... In this process, arguments based on 'authority' are no longer valid; in order to function, authority must be on the side of freedom, not against it. Here, no one teaches another, nor is anyone self-taught; people teach each other from beginning to end." Unlike the dialogue between teachers and the curriculum text, which constructs the teacher's status as subject (relative to the designers of curriculum materials), dialogue between teachers and students requires the deconstruction of teacher authority and teacher-centeredness, because in traditional monologue-style teaching the teacher, as against the students, played the role of principal and suppressed the reasonable exercise of the students' subjectivity.
Dialogue between teachers and students opposes the teacher's control of students and the prevailing of the teacher's views over the students'; it advocates equal communication and exchange between them. Likewise, dialogue among students stresses equality: academically successful students must not suppress unsuccessful ones, and student cadres must not suppress ordinary students. In equal dialogue among teaching subjects, speaker and listener constantly change places; the teacher and every student are at once listening and appreciating, speaking and evaluating. Such equal dialogue brings teachers and students genuine vitality, common development, and mutual "self-realization."
3. The teaching subject's self-dialogue
Self-dialogue, dialogue with oneself, is an advanced and deepened form of dialogue. Freud believed that each person's psychological structure contains the "id," the "ego," and the "superego," and that a person's psychological development is driven by the contradictions and conflicts among these three. In my view, these contradictions and conflicts are a kind of self-dialogue: dialogue between the realistic "I" and the ideal "I," between the past "I" and the present "I," between the "I" observed and the "I" observing, between "I" and the "other I." Teaching activity should promote not only the dialogue of the teaching subjects with the curriculum text and with one another, but also the self-dialogue of each teaching subject, making it a regular, conscious habit, so that the subject attains rational knowledge, appreciation, and understanding of himself. In fact, the teaching subject's self-dialogue is his self-reflection. Whether teacher or student, self-reflection in dialogue points to one's own understanding of the meaning of the curriculum text (is it reasonable, creative, unique?); to one's own conduct in dialogue (does it conform to the spirit and principles of dialogue, such as not overriding others and respecting others' personalities?); and to one's own characteristics and style of teaching and learning. It points to one's advantages and strengths in the dialogue and, more importantly, to one's deficiencies and shortcomings in it. Through self-dialogue and reflection, the teaching subjects move continually from one-sidedness, narrowness, and dullness toward comprehensiveness, breadth, and openness. Of course, the teaching subject's self-dialogue and reflection occur not only after classroom dialogue but also within it, in the very movement of classroom teaching.
The latter enables the teaching subjects to adjust their dialogue and behavior in time and to improve the dialogue as it proceeds; the former serves as a caution for the next classroom dialogue. It is the teaching subjects' self-dialogue that promotes the sound development of classroom dialogue and improves its quality.
Among the three forms of dialogue described above, dialogue among the teaching subjects is the most visible, while dialogue with the curriculum text and self-dialogue are more recessive. But the three forms are closely linked and can promote and convert into one another. Among them, dialogue among the teaching subjects plays the key role: it can promote dialogue with the curriculum text and self-dialogue. Dialogue with the curriculum text and self-dialogue, in turn, affect the intersubjective dialogue of teaching, either constituting its basis and precondition or running through its process.
5. Incoherency in communication normally entails abstraction that refers to particulars, entailing a self-possessed, location-dependent uniqueness that fails in description. One might test for incoherency by the ability to visualize a corresponding representation of the referred topic. For instance, the topic of color does not possess this capability, whereas the color red does, though one must also attach to the representation a physical shape, if only a flat plane or a volume of some kind that is red in color. In this manuscript I wish to present an alternative and separate definition of incoherency, one that involves facets of the world and the individual's place in it, such that a seemingly coherent, construable, valid, and logical deduction, universal in form, can emerge as meaningless with respect to life experience and survival. Oddly, the example used, Albert Einstein's query concerning expansion versus contraction of the universe, itself refers to potentially construable volumes of space, although the universe, fitting into the category of abstraction, is a visually inconceivable entity. With this example I wish to demonstrate that it is potentially possible to arrive at a fairly certain argument that the universe must be one or the other, expanding or contracting, but that this fact has no application to the endeavors of individuals or of mankind, other than its emergence in the arguments presented here. In order to present my argument, new perspectives on history and human behavior, with semi-conceptual roots in ancient and medieval philosophical writings, will be introduced.
Discussion
I) Philosophy and History
A) George Berkeley, Boethius, abstraction, free will and human behavior (7)
George Berkeley (1,4) introduced the notion that the existence of matter depends on the existence of a perceiving witness, and from this belief he introduced an argument for the existence of an ever-present, all-perceiving, omnipotent deity. His argument focuses on abstraction, the ability to conceive, and language, whose sole utility he held to be based on circumstance, particulars, and associated willed behavior.
Centuries before Berkeley, Boethius (11), discussing free will in the context of a deity assumed to have divine foreknowledge of events, and in light of the potential falseness of the self-assumed right to punish civil offenders, suggested that knowledge is a property of the nature of the knower and that the existence of divine foreknowledge has no bearing on the free will of human beings.
In light of Berkeley and Boethius, an unresolved paradox remains concerning point of reference and knowledge related to existence: Berkeley points out the importance of the first-person perspective with regard to abstraction, while Boethius, using the lower animals as an example, points out that even validly construed knowledge is restricted in a similar manner, to particulars and genera.
B) Albert Einstein and the Theory of Relativity
Albert Einstein (5,6) introduced a mathematical and philosophical concept of frame of reference, of perspective with respect to the laws of nature, with the creation of the special and general theories of relativity. His theorizing, incomplete and seeking a unified theory, resulted in the creation of space-time concepts that distributed the properties of the world into a general scheme: one allowing for a logical mathematical reality separate from human experience, alongside a reality of human experience explainable by a subset of the mathematics and philosophy he introduced. Central to this division, and to its creation in the search for explanation, is a paradox he was unable to resolve. One might begin an inquiry into the evolution of ideas with the question of whether it is this same paradox, these same perplexities, that plague mankind.
II) The Paradox and Consciousness
I first want to introduce the notion that the existence of paradox is central to nature, if not the sole ingredient of life, with regard to the self-possessed awareness, the consciousness, of human beings who construct notions and concepts; and that, in elemental form, it emanates from a process of witness and transmission entailing a choice of path that is common to all of nature, i.e., internal physiological brain function and consciousness as parallel processes based on a rudimentary consciousness possessed by all.
Second, re-examining the theory of relativity: it draws a concept of perspective from a sensory objectivity, holding that there is a primary mathematical constant of nature based on a ratio of geometries rather than a ratio of ratios. What is meant by this, existing at the fringes of reality and question, is a reflection about a standard perspective. For example, for two men traveling apart from one another at a given velocity with respect to a third, stationary party, their velocities should be vectorially addable if a standard perspective coherently exists. Berkeley's communications are oriented in precisely this direction: they disregard abstraction as incoherent, yet might be taken as accepting the notion of relative velocity if all parties conceive and add velocity (for example) identically. He might, however, have rejected the existence of unwitnessable electrons and photons with mass that traverse space, especially if they are conceived to exist universally in the absence of a perceiver. Boethius, speaking of knowledge in terms of its possessor, if taken in a strict sense leaves no room to discuss the particulars of knowledge, only the fact that all human beings possess it.
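The claim that two travelers' velocities relative to a stationary third party "should be vectorially addable if a standard perspective coherently exists" is precisely where special relativity departs from the classical picture. The following is a minimal sketch (with illustrative values only, not drawn from the text) contrasting Galilean addition with Einstein's composition law for collinear velocities:

```python
# Sketch: Galilean vs. special-relativistic velocity addition, showing why
# velocities cease to be simply additive once the constancy of c is assumed.

C = 299_792_458.0  # speed of light in m/s

def galilean_add(u, v):
    """Classical composition: velocities simply add."""
    return u + v

def relativistic_add(u, v):
    """Einstein's law for collinear velocities:
    w = (u + v) / (1 + u*v / c^2), which never exceeds c."""
    return (u + v) / (1.0 + (u * v) / C**2)

# Two observers each receding at 0.75c from a third, stationary party:
u = v = 0.75 * C
print(galilean_add(u, v) / C)      # -> 1.5, exceeding c classically
print(relativistic_add(u, v) / C)  # ~0.96, staying below c
```

The discrepancy between the two results is the formal content of the "paradox of perspective" the essay gestures at: there is no single standard perspective from which the classical sum is valid.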
Third, I wish to introduce for comparison a separate notion of relativity, arising from the notions of Berkeley, Boethius, and Einstein, excluding questions about a divinity but considering consciousness, the sense of self, the soul, as a fact parallel to the existence of the brain and not a product of it alone. Assume that Einstein's mathematical and geometrical considerations are correct only as a description of the transmission of energy, i.e., E=mc^2, but that c, if velocity is proposed to exist only as common knowledge of its existence, with no coherent content in terms of mathematical comparisons or ratios, can be held only as a variable exempt from ratio, i.e., one that relates only to the possessor of the knowledge that construes it. Jokingly, but in a conceptually related vein: to satisfy the logic of both Berkeley and Boethius, one who constructed a logical path from concepts involving the velocity of light and the theory of relativity to the practicalities of conscious experience and language use might arrive at the principle, combining these philosophical (indivisible) concepts, that each individual would require his own physical, mechanical facilities to inspect the evidence behind concepts that forecast (invisible) divides, as a matter of the practicalities of existing: each attributed with his own knowledge depending on his nature. Though this is an exaggerated example, intended to present a philosophical concept rather than to refute scientific method, one can ask about the meaning of scientific concepts: if they have no apparent practical meaning, can they nonetheless yield tangible, coherent facts that ultimately relate to experience, abstraction, and knowledge rather than to the physical world?
With regard to philosophical proof, ultimately perhaps more a matter of the practical than of the scientific, it is not impossible to conceive of a wish for common, hands-on experience as the way to know the world.
As a distinct notion of relativity, one can move from this argument and hypothesis about consciousness to suggest, most importantly, that distance/velocity is ubiquitously relative, and that its conception, consciousness itself, is a synergy of the detection of processes external to the self and of parallel internal processes connected with the sequestering of processes, energies, and forces brought about by the physiological. The variables space, time, force, and energy form an elemental, ubiquitous unit, each entailing the others, conceptually equal to one another, as if the world were an inflatable surface with the peculiarity that surface entails force, entails energy, entails a content of paths and courses all modulated and definable by the same non-mathematical unity. Its procession is a total contained by all the paths to the present, such that the concept of soul, a world of individual uniquenesses descended from it, is totally contained by it and yet is indivisible, immune to the genera established by scientific method; the result is the world of the nominalist, to the exclusion of the realists.
Related to the notion of relative distance, obviously, are the concepts of relative time and of the expansion versus contraction of the universe in the theory of relativity. If distance is relative, the space or total volume of the universe can be either increasing or decreasing. The energy of light can be applied to the generation of space and mass with a remainder, or it can be construed that light, in the process of transmission, consumes space. If one considers history, all processes, as an unfolding, it is suggested that at each stage the material in the folds in some manner undergoes a change parallel to changes in volume/space with expression; yet at the same time there is obviously new expression, addition, accompanied by a loss of total complexity/space.
If total energy is decreasing, as common sense might suggest, one may assume that light loses energy to work (with either an increase or a decrease in volume). A synergy, a new whole, emerges at each step from the first-person perspective, appearing both lesser and greater than its parts, but this is only an intuitively conceived notion of the mechanism of cognitive operation. One might compare, in the act of perception, a net decrease in lengths producing a positive output that details the spaces one perceives, secondarily entailing a deception that itself implies the consumption of resources/energy: a descending that is discernible only cognitively, versus a positively oriented construction that is innately presented; an inverted product of processes that might be conjectured to derive ubiquitously from inversions which, in opposition to the sensual perceptions of existence, have no self-known orientation. It may be in this manner that the psychologies and physiologies of living systems operate in unison to construe and seek space, to abstract positive space in conditions of stress, leading to further abstractions oriented to uncover the deceptions of nature from this aggressive position, absent the realization that functioning, by means of the same deception that perplexes, also leads, via advanced problem solving and application, to meaningless consumption, as if existing perplexity threatened to instigate action regardless of its source. It may result that innate deception is an indicator of the state of living, eliciting a response when encountered, as if a threat, to initiate and perpetuate function.
Recent computer simulations are able to construct object representations from shadows and overlays extracted from visual and positional information (2), suggesting that a perceptual representation of the world occurs in some form innately with the objects that are perceived.
It is thus proposed that the question of expansion versus contraction is incoherent and may be undeterminable. From the premise that values of velocity/distance are relative to those who possess knowledge of them, one arrives at the conclusion that new, unique things are emerging, new complexity and growth, at the same time that one arrives at the principle that the world, in the same process, is decreasing in energy and in the complexity associated with its structure: descending rather than ascending. It is interesting to note that chromosomes have recently been reported to contract with the application of a pulling force (8); that light can be captured at very low temperatures, relocated, and identified as the same original light that was captured (10); and that light can be trapped and found to rebound, refract, and reflect in a closed container (12). The particulars of the world behave on their own in a manner such that construals of them, parts of a world whole, are observed to be as peculiar in nature as the whole itself, leading one, on a purely statistical view, to conjecture that the irresolvable paradox is ubiquitous. A resulting futility descends into the realms of everyday life. Adjunct to this notion, a common futility in daily life, stemming from public education and mass communications, descends on the individual ultimately from attempts to construct natural law and order the environment, and is logically incoherent. Paramount to this is the coherent fact that the world may logically be expanding or contracting, though the question has no coherent meaning for the practicalities of life and survival, which no existing meaning is able to exceed or evolve into further divisions.
CONCLUSION: The “culture turn” is a dynamic process that since the nineteenth century has unfolded in the worlds of theory, art, and politics. The reference to a “culture turn” captures a widespread movement – played out differently in various disciplines, nations, and traditions – that emphasizes the importance of art and culture for education, moral growth, and social criticism and change. By the 1980s, this development led to an explosion in forms of “cultural studies,” “identity politics,” and “multiculturalism” in response to changes in the structure of capitalism and relationships among economic, cultural, and political institutions.
While the term “culture” is notoriously vague and complex, one might define it as the social process whereby people communicate meanings, make sense of their world, construct their identities, and define their beliefs and values. Far broader than the arts, culture is rather the entire field and process of symbol interaction, communication, and technologies through which people define and express themselves. Since its inception in ancient Greece, Western society has sharply distinguished culture from “nature” – a category that includes the physical world, nonhuman animals, and often human groups (e.g., Blacks, Jews, and women) viewed as “savage,” “barbaric,” or “subhuman.” Westerners – specifically, white, male, European elites – defined culture in opposition to nature. This binary logic was employed in order to construct human identity (by virtue of an alleged essence of “rationality”) as radically distinct from animals, to fulfill (European) humanity’s self-ascribed mission or purpose to conquer nature and establish “civilization,” and to assert their professed superiority to other groups marked as the “Other” in opposition to their role as Subject.
There are two broad ways to approach the study of culture. According to the idealist outlook that prevailed from Plato to Hegel in the nineteenth century, culture is defined as an ideal realm of thought and meaning independent of social dynamics and/or the vicissitudes of history. While societies may differ and change, metaphysical and moral standards of “truth” abide as eternal and universal ideals. Idealist outlooks failed to recognize that all forms of thought and culture change over time and are contingent constructs of their social context. Culture is a social and historical product that changes in relation to shifting material dynamics. As Louis Dupre deconstructs the universal biases and ahistorical and asocial ideology of idealism, "the very concept of culture as a realm of values independent of social-economic structures, into which man 'withdraws' from his daily occupations, is an ideology that could only arise in a compartmentalized society" (cited in Adamson 1985, p. 32).
In direct opposition to this idealist model, the materialist definition emerged in the mid-nineteenth century with the philosophy of Karl Marx. Reversing the logic of idealism, Marx argued that consciousness does not determine social being, rather social being determines consciousness. Fundamentally, human existence is rooted in the economic dynamics of trade, markets, and production. As soon as surplus production emerges in history, Marx argued, social classes arise and the struggle for power and resources becomes the driving force and “motor” of history. By way of a problematic architectural metaphor, Marx views production, economics, and technology as the “base” of society upon which all forms of thought, culture, politics, and law arise as a related “superstructure.” The ruling ideas of society are those of the ruling class, and they comprise an “ideology” – broadly, a conceptual outlook or worldview -- that advances elite interests and justifies class domination as good, natural, and the only possible social arrangement. But the dominant class worldview, Marx noted, is a biased distortion of reality and becomes a “false consciousness” for those who uncritically accept it as given, factual, and true. In reference to a key element of capitalist ideology, Marx described how the vast machinery of production spawns a “commodity fetishism” whereby objects (commodities) take on human-like qualities (assuming an apparent life of their own) and subjects (workers) become more and more like things integrated into technological systems. Bourgeois economists, themselves deluded by this alien “topsy-turvy” world, treated the commodity as if it were independent of social relationships and capitalist exploitation.
Marx’s often subtle analyses of the reciprocal interaction between the economic-technological “base” and the cultural-political “superstructure” were reduced to simplistic and reductionist formulas by many “Marxists” who failed to grasp the “relative autonomy” of culture and politics from capitalist imperatives (see Best 1995). For the “vulgar” or “mechanistic” form of Marxism, such as the official philosophy of the Second (1889-1916) and Third International (1919-1943) (including theorists like Karl Kautsky and Georgi Plekhanov), issues related to art, culture, ideology, and everyday life were ignored, trivialized, or simplified through the focus on economics and class struggle. In a fairly automatic manner, it was supposed, the inherent contradictions of capitalism and “laws of history” would lead to socialist revolution. Consequently, in Russia, China, and other communist societies, cultural questions were subordinated to work; ideology critique was devalued in favor of the “scientific” laws studied by “dialectical materialism”; concerns with subjectivity and everyday life were denounced as "bourgeois"; avant-garde modernist styles were pilloried as "decadent"; the sensuous and affective power of art was shunned as a threat to repressive asceticism and puritanical ideals; and “authentic” art was defined in terms of “socialist realism” that mythically glorified workers and reduced art to mere propaganda.
Beginning in the 1920s, Georg Lukács, Karl Korsch, and Antonio Gramsci renounced economism and scientism and emphasized the importance of subjectivity, culture, and ideology critique. They thereby inaugurated the fertile tradition of “Western Marxism” that defined itself in contrast to the sclerotic dogmas of Soviet Marxism. Western Marxists rejected the assumption that social change would come automatically through the “laws of history” and that revolution was possible without specific strategies to change and radicalize the consciousness of workers. Merging Marx’s theory of commodity fetishism and Max Weber's theory of rationalization, Lukács (1975) analyzed how commodity exchange had become the central organizing principle of twentieth century capitalism, permeating education, law, and culture generally. Such conditions hardly guaranteed the emergence of a revolutionary proletariat, but rather necessitated strategies to actively forge a revolutionary “class consciousness” through radical art, culture, and education. Similarly, Karl Korsch (1972) responded to the vulgarization of Marxism with a call to reestablish its philosophical relation to Hegel and to initiate a substantive political education of the working class before they could lead a successful revolution. Gramsci (1971) emphasized that the ruling class achieved dominance not only through coercion (e.g., violent attacks on striking workers), but also through consensus whereby people give assent to the powers that oppress them, viewing them as legitimate and inalterable. To undo the stranglehold of “cultural hegemony” disseminated through compulsory schooling, mass media, and popular culture, and to prepare the way for a mass insurrection, Gramsci sought to initiate a “counter-hegemony” struggle through radical education, interventions in capitalist-controlled media, and forging new cultures.
The critical rethinking process launched by Western Marxists was developed in fruitful ways by the “Frankfurt School.” Beginning in 1923, theorists including Max Horkheimer, Theodor Adorno, Herbert Marcuse, Leo Lowenthal, Erich Fromm, and Walter Benjamin formed the “Institute for Social Research” (see Wiggershaus 1994). The Frankfurt School abandoned the ahistorical, positivist, and disciplinary outlook of mainstream philosophy and social science in favor of a historical, critical, and interdisciplinary approach that analyzed the interrelationships among culture, technology, and the capitalist economy. Frankfurt School theorists synthesized political economy, sociology, history, and philosophy, with the first modern “cultural studies” that analyzed the social and ideological effects of mass culture and communications. Against staid, pseudo-objective forms of “traditional theory,” the Frankfurt School developed a “critical theory” distinguished by its practical and radical objective, namely, to emancipate human beings from conditions of domination. Recognizing the limitations of “orthodox” or “classical” Marxism, Frankfurt theorists developed a “neo-Marxist” orientation that retained basic Marxist theoretical and political premises, but supplemented the critique of capitalism with other perspectives, thereby spawning hybrid theories such as Freudo-Marxism, Marxist-feminism, and Marxist-existentialism.
With the menacing rise of Hitler and Nazism, Horkheimer, Adorno, and Marcuse fled Germany and settled in the United States. They analyzed how the US itself was becoming totalitarian with the rise of state-monopoly capitalism and the role played by mass culture and ideology in stabilizing crisis tendencies and shaping consent to domination. Moving from the control of production to the management of consumption, from the workplace to the home space and everyday life, capitalism had penetrated virtually all aspects of society and personal existence. Against the nightmarish backdrop of world wars, totalitarian communism, fascism, monopoly capitalism, new forms of social control, and the cooptation of the working class, Frankfurt School theorists were understandably pessimistic.
Thus, in The Dialectic of Enlightenment (1972), Adorno and Horkheimer argued that the powers of modern rationality, science, and technology championed by Enlightenment thinkers and Marxists led to domination not liberation. Building on a nineteenth century critique of “low culture,” extending Marx and Lukacs’s analysis of commodity fetishism, and developing Gramsci’s concept of culture as a form of hegemony, Adorno and Horkheimer described how culture had become integrated into the economy and a new “culture industry” emerged. An apparent oxymoron, their notion of “culture industry” showed how capitalism had colonized culture and everyday life, how the integrity and uniqueness of an artwork became obliterated in conditions of mass production, how the intrinsic value of expression was reduced to the extrinsic value of profit, and how culture weakened and pacified rather than stimulated and fortified the mind.
During the 1930s and 1940s there were lively debates among Adorno, Lukács, Benjamin, Bertolt Brecht and others on whether art could still be a vehicle of criticism, education, and change; if so, the question shifted to which aesthetic forms or styles were best suited to this purpose. Whereas Benjamin (1969) analyzed how the art work has lost its aura in “conditions of mass production and reproduction,” but argued that mass media had the potential to democratize culture and promote critical thinking, Adorno thought this process spelled the collapse of critical distance and the cooptation of oppositional politics – a key concern of later postmodernists (see below). Doubting the effectiveness of realism or overtly political art such as Lukács and Brecht promoted, Adorno argued in favor of radical modernist and avant-garde styles, such as novels of Franz Kafka or the plays of Samuel Beckett, which he believed alone could provoke critical consciousness.
But this last-ditch hope too was dashed with the implosion of “high” and “low” art and the commodification and cooptation of modernism itself. By the 1950s, the cubist prostitutes of Picasso and the starry nights of Van Gogh fetched tens of millions of dollars on the burgeoning art market, the works of Kafka and Beckett were standard university seminar fare, the anti-art gestures of Dadaism were institutionalized within museum parlors, and the jarring images of surrealism served the ends of advertising.
Amidst these conditions, Marcuse (1974, 2006) depicted Western capitalist societies as totally administrated systems populated by one-dimensional conformists. By spreading cultural narcotics and binding desire to consumption, capitalism had succeeded in bringing about a "socially engineered arrest of consciousness." In the 1960s, however, with the emergence of “new social movements” (e.g., Blacks, youth, women, peace, and anti-nuclear groups) Marcuse (1971) gained renewed hope for social revolution via a “Great Refusal” of capitalism. In the spirit of Western Marxism, Marcuse emphasized the need to change the subjective conditions of life (e.g., needs, desires, sensibilities, and the imagination) as much as the objective conditions of society (e.g., economics, politics, and law). He thereby advanced a cultural politics that emphasized the crucial role that critical and oppositional art could play in individual and social transformation.
By this time, the Frankfurt School had shaped a broad and fertile field of Marxist-oriented cultural studies, or simply “Cultural Marxism.” One important offshoot of this development was British Cultural Studies. Beginning in the 1950s, theorists such as Raymond Williams, Richard Hoggart, and E.P. Thompson analyzed the significance of working-class cultures in Britain and the negative effects of mass culture. In 1964, Hoggart and Stuart Hall founded the “Birmingham School” of cultural studies. Like the Frankfurt School, Birmingham theorists employed an interdisciplinary approach to study the ideological effects of mass culture and communications. Unlike the Frankfurt School, however, the Birmingham Centre emphasized not only capitalist domination, but also widespread resistance to oppression. Hebdige (1979), for instance, explored how subcultures subverted social codes to generate their own meaning and symbols, while Hall (1980) – a pioneer of “reception theory” – analyzed how people actively “decoded” signs and messages “encoded” in cultural “texts” (e.g., films, fashion, paintings, television programs).
Whereas Frankfurt theorists (with exceptions such as Benjamin) dichotomized high and low culture and largely ignored popular culture except to treat it as capitalist ideology, with Adorno focusing on the critical potential of the avant-garde, British theorists studied popular culture and emphasized the dialectic of domination and resistance. The Frankfurt School abandoned hope for the working class as a source of emancipatory change, whereas British cultural studies valorized youth and workers for their ability to resist ideological power and to create their own styles and identities. But if the Frankfurt School focused on political economy and “hegemony” at the expense of lived experience, active subversion of the dominant culture, and “counter-hegemony,” British cultural studies went too far in abstracting culture from political economy and exaggerated the significance of “resistance” – a marked feature of contemporary cultural studies (Kellner 1997). If the Frankfurt School focused on the avant-garde at the expense of popular culture, British cultural studies concentrated on popular culture without engaging the political possibilities of avant-garde art (see Adamson 2007).
In addition to Germany, the US, and England, there were crucial developments in France, where numerous sociologists and philosophers attempted to mediate determinist or functionalist views of social institutions (which over-emphasized the determinant power of “structure”) and idealist or voluntarist concepts of culture and subjectivity (which exaggerated the role of “agency”). Pierre Bourdieu (1977) stressed the active role of subjects in the production and reproduction of the rules, habits, and dispositions of their lives; Michel de Certeau (1974) analyzed how individuals appropriate and subvert mass culture through “tactics of consumption” to claim their autonomy from social forces; and Henri Lefebvre (1971, 1992) engaged the impoverishment of daily existence in capitalism and broadened Marxist theory into analyses of the city, the urbanization of society, and the politics of social space in general. Guy Debord (1976) and the Situationist International theorized how consumer capitalism, mass media and entertainment, and the proliferation of images and signs generated a “society of the spectacle” that pacified individuals, a condition Jean Baudrillard (1983) argued led to a “hyperreality” that blurred the boundaries between illusion and reality. But whereas Debord looked to the capitalist social relations obscured by the fetishized appearances of commodity-images, Baudrillard claimed reality was irrecoverably lost. If Debord and the Situationists posited the “constructed situation” as the antidote to the spectacle, using experiments in radical cultural politics to reawaken revolutionary agency, Baudrillard proclaimed the triumph of objects over subjects, the demise of revolutionary dreams, and the “end of history” in spent social conditions where nothing new could emerge and one can only “play with the pieces” of the past.
Baudrillard exemplified the jaded “postmodern condition” (Lyotard 1984) premised on the “suspicion” of “metanarratives” – whether Christianity, Hegelianism, Marxism, or Bourgeois Progressivism – that view history as the realization of Freedom or Progress. Indeed, by 1960, there was already a widespread sense within the art world that modernism was over, that it had exhausted itself and done all that could be done (Best and Kellner 1991, 1997). A “new sensibility” (Irving Howe) emerged in criticism and the arts that expressed dissatisfaction with modernism. Seen as stale, boring, pretentious, elitist, and alienating, European and American high modernism were rejected in favor of new attitudes and styles. The new postmodern sensibilities and aesthetic forms spread like wildfire, erupting in the novels of William Burroughs and John Barth, the music of John Cage, the pop-art paintings of Andy Warhol and Robert Rauschenberg, the architecture of Robert Venturi and Philip Johnson, as well as dance, film, photography, and the creation of new forms such as happenings, performance art, multi-media installations, and computer art.
The postmodern turn in the arts maintained some links to earlier aesthetic traditions while also breaking in key ways from bourgeois elitism, high modernism, and the avant-garde. Like modernism and the avant-garde, postmodernists reject realism, mimesis, and linear forms of narrative. But while modernists championed the autonomy of art and excoriated mass culture as bland gruel for a crude majority, postmodernists rejected elitism and embraced the implosion of “high” and “low” cultural forms in an affirmative pluralism and populism. Rather than snobbishly dismiss popular culture, postmodernists embraced it and assimilated its images and influences into their work. While modernists attempted to create monumental works and to forge a unique style, and avant-garde movements wanted to revolutionize art and society, many postmodernists were ironic, playful, and apolitical, eschewing concepts like genius, creativity, and even the author. While modernist works produced a wealth of complex meanings and interpretations, postmodern art was surface-oriented and renounced the attempt to produce and locate “deep meanings.” As evident in postmodern architecture, the quest for stylistic purity and minimalism gave way to eclecticism, such that the postmodern artist – as if to confirm Baudrillard’s eulogies for modernism – playfully and ironically played with past styles and forms.
In his seminal essay, “Postmodernism, or the Cultural Logic of Late Capitalism,” Marxist literary critic Fredric Jameson (1984 and 1991) vividly describes the panoply of new attitudes, experiences, and cultural forms sweeping throughout American and European society. Among the many characteristics of postmodernism, Jameson singles out as especially important “a new depthlessness, which finds its prolongation both in contemporary ‘theory’ and in a whole new culture of the image or the simulacrum” (1991: 6). Akin to the “rhizomatic” analyses of Gilles Deleuze and Felix Guattari (1983), Jameson notes how postmodern culture ruptures narrative and decenters subjectivity in a “schizophrenic” dispersal of fragments. Individuals are overloaded with information, images, and the complexities of a vertiginous “hyperspace” that disables their ability to situate themselves within larger systems of meaning, thus demanding a new “cognitive mapping” of contemporary subjective, cultural, social, political, and economic conditions.
Although Jameson interprets postmodernism as the new “cultural dominant” that supersedes modernist forms and philosophies, his concept was less a stylistic marker than a periodizing device marking a new stage in the development of capitalism. Rejecting idealist approaches, Jameson relates changes in the cultural “superstructure” to shifts in the economic base, and thus interprets postmodernism as the “cultural logic of late capitalism.” Jameson reasserts the importance – indeed, primacy – of Marxism at the very moment others proclaimed its death (Baudrillard 1983) or attacked its “metanarrative” of history (Lyotard 1984). Postmodern culture, for Jameson, emerged as a product of a postwar society dominated by consumerism, mass media, images, advertising, information, computers, and the total commodification of life in a global capitalist market system. Indeed, because postmodernism is so intertwined with mass culture, media society, and capitalist markets, Jameson argues that the “critical distance” between culture and economics, the outsider and the insider, has been “abolished” -- an attitude voiced by many postmodern theorists and artists who saw no escaping the gravitational orbit of capitalist cooptation.
While such pessimistic discourses bear the marks of defeat in the aftermath of the 1960s (Best and Kellner 1991), postmodernism is not a monolithic discourse, for along with the ludic artwork of Warhol or the nihilism of Baudrillard there were positive and political forms of postmodern art, theory, and politics that incorporated progressive elements of the 1960s. Thus, in addition to an apolitical, self-indulgent, or defeatist “postmodernism of reaction,” Hal Foster (1983) identified a competing “postmodernism of resistance,” such as one finds in the novels of Thomas Pynchon, the photography of Cindy Sherman and Barbara Kruger, and the poststructuralist-inspired “radical democracy” of Ernesto Laclau and Chantal Mouffe (1985).
Political postmodernism is also expressed in various forms of “identity politics” and “multiculturalism.” In the transition from the “new social movements” of the 1960s to the identity politics of the 1980s, any semblance of unity or common vision fractured once women, people of color, gays and lesbians, and others focused on their own “subject positions” as oppressed or underprivileged groups. Identity politics turned to the distinct history, culture, and consciousness of marginalized groups, who sought to avoid losing uniqueness to either the “melting pot” of US culture or the acid bath of Marxist politics that reduced all forms of oppression to class struggle. Many proponents of identity politics identified themselves as postmodernists and thus -- congruent with the postmodern theories of Lyotard, Michel Foucault, Jacques Derrida, Julia Kristeva, Richard Rorty, and others -- valorized difference over unity, with different groups pursuing their own single-issue, reformist politics. In the 1990s, however, new “anti-” or “alterglobalization” movements rejected this approach to form new kinds of alliances – such as between North and South and labor and environmental groups – essential to fight the growing power of transnational capitalism (see Brecher et al. 2000).
Another form of the postmodern politics of difference championed “multiculturalism” in university studies and throughout society as a whole, thereby promoting greater diversity and equality. Rather than seeing multiculturalism as a call for inclusion, however, conservatives denounced it as a corrosive relativism and subversive attack on the timeless norms, eternal truths, and hallowed academic canon (e.g., the “Great Books” programs centered on the ideas of dead, white, western males) of Western culture. This set off a new round of “culture wars” in which conservative academics, media commentators, and fundamentalist Christians demonized liberalism (conflated with Leftism) as the cause of every form of social “decline” and went to battle to preserve their beloved traditions and social status.
As multiculturalism spread throughout academia, so too did “cultural studies” in the form of books, articles, conferences, and department programs dedicated to analyzing the profound social influence of advertising, images, mass media, and popular culture (see Grossberg et al. 1992; Kellner 1995). Work done under this rubric has been incredibly diverse and fecund, including a variety of feminisms, gay and lesbian studies, and queer theory; projects for critical pedagogy (Giroux 1988, McLaren 2006) and critical media literacy (Kellner 1998); sociological studies of “McDonaldization” and the “globalization of nothing,” dynamics rooted in the spreading logics of industrialization and bureaucratization (Ritzer 2003, 2004); science and technology studies (Best and Kellner 2001); and cyberstudies (Gray 1995) and animal studies (Baker 2000, Wolfe 2003).
As culture becomes more pervasive throughout everyday life, the task of developing a critical analysis of its influence grows increasingly urgent. The richest approaches to cultural studies will absorb the best elements of prior traditions and avoid their flaws and limitations. Such a perspective would, for instance, retain the Frankfurt School’s contextualization of culture within capitalist social relations, while eschewing the tendency of many Birmingham and postmodern theorists to sever culture from economy. Conversely, it would reject the Frankfurt School’s outmoded dichotomy between high and low culture and recognize their implosion in a unified field dominated by capitalist imperatives. It would also break with the deterministic tendencies of Frankfurt School and postmodern theorists in favor of complex descriptions of how individuals are both shaped by and in turn shape culture, signs, and ideology. It would analyze the subtleties of resistance without exaggerating their significance or occluding the need for large-scale social transformation. It would be multiperspectival in its capacity to use different theoretical orientations (e.g., Marxism, feminism, race theory, queer studies, and animal rights), to draw on a wide range of texts (be they architecture, books, film, television, or the Internet), to analyze a broad array of identity positions (including not only class but also sexuality, race, gender, nationality, and species), and to illuminate the various ways in which cultural texts are encoded and decoded, produced and consumed (Kellner 2007).
At its best, cultural studies is not an esoteric academic exercise, but rather part of a critical pedagogy that teaches individuals how to interpret and decode the media representations that so powerfully shape their consciousness, identities, and lives. Critical cultural studies teaches skepticism toward authority, logical reasoning, value thinking, and the importance of our roles as citizens, not consumers. Critical cultural studies can “empower people to gain sovereignty over their culture and to be able to struggle for alternative cultures and political change. [It] is thus not just another academic fad, but can be part of a struggle for a better society and a better life” (Kellner 2007).
Dr. Steve Best