Tuesday, November 17, 2015

 Hello Students!
Welcome again to the new 2015/16 academic year. I hope you are all in good health. As usual, this academic blog brings together all students studying Media and Communication to share, exchange and discuss issues pertaining to media industries, content, messages and audiences. Through this blog you will be able to post and access diverse media and related materials, ranging from assignments, lectures and presentations to news and information. I hope you will find this medium useful in your search for knowledge.
This semester, though, the blog will focus mainly on three courses: Media Ethics, Audience Research, and Health Communication.
I also encourage every student to register as a contributor (student-journalist) with our Hill Observer newspaper. Send newsworthy stories about what is happening at your university and/or the place where you live and earn Tsh. 10,000 in cash! This is your opportunity to gain experience in news-story writing.
Karibuni sana.
AB.

Saturday, June 20, 2015


RESEARCH IN ADVERTISING & ADVERTISEMENT
  • Initially, research was not part of advertising and PR; decisions were made based on intuition and experience.
  • Research became a decision-making tool for managers as competition increased, markets diversified and costs rose.
  • Through research, PR and advertising specialists gather information about product packaging, the most effective media vehicles for an advert, etc.
PART 1: Advertisement Research
There are three major areas of research: Copy testing; Media research; Campaign assessment studies
  1. Copy Testing
  • A large share of advertising research is conducted as copy testing.
  • Copy testing refers to research conducted to develop effective advertisements and to determine which of several advertisements is the most effective.
  • Copy testing determines an ad’s effectiveness based on consumer responses, feedback, and behaviour.
  • The study is done at each stage of the advertising process, i.e. before and after a campaign starts (copy pre-testing and post-testing), and indicates what to stress and what to avoid.
  • Having established the content of an ad, tests must be performed to ascertain the most effective way to structure these ideas.
  • The copy might be tested for readability and recall. The aim is to determine whether the variable tested significantly affects the liking or the recall of the advert.
Basic features of Copy testing:-
  • Copy testing provides multiple measurements to assess the performance of an advertisement;
  • It relies on human response to communications – the reception of a stimulus, the comprehension of the stimulus, and the response to the stimulus.
  • It allows for consideration of whether the advertising stimulus should be exposed more than once.
  • It recognizes that the more finished a piece of copy is, the more soundly it can be evaluated and requires, as a minimum, that alternative executions be tested in the same degree of finish.
Approaches to Copy Testing
  • There are various research approaches used in copy testing. The approaches often focus on layout, design, colour, narration (voice-over), music, illustration, size, length, etc.
  • The approaches used in copy testing research can be explained by considering three dimensions: the cognitive (knowing), affective (feeling), and conative (doing) dimensions (Leckenby and Wedding, 1982).
  1. The cognitive approach
  • Involves studies about attention, awareness, exposure, recognition, comprehension, and recall (unaided and aided) of advertising.
The researcher investigates the extent to which people (consumers) know about a product, service, concept, or phenomenon after being exposed to advertising messages.
Methodologies: focus groups, observations/ physiological studies (eye movement) and consumer panels are usually used for information/data gathering.
In some cases, the research involves a pre-test and post-test (measurements are taken before and after exposure to the advertising), while other research involves post-test measurements only.
  2. Affective approach
  • Under this dimension the researcher studies consumers' attitudes toward a particular product or service to see if (or how) they have changed because of exposure to an advertisement or an advertising campaign.
  • Methodologies: researchers use a variety of methods to collect information: focus groups, telephone interviews, central location testing (large groups in an auditorium setting), and a variety of physiological measurements.
  • The affective dimension is important because the degree of liking expressed by consumers toward a commercial is closely related to awareness, recall, and greater persuasive impact (Walker & Dubitsky, 1994).
  • Liking an advert is one of the essential factors in determining its impact (Wimmer & Dominick, 2000).
3. Conative Approach:
  • Deals with actual consumer behavior: buying pre-disposition (intent to purchase) and actual purchasing behavior.
  1. Buying predisposition research:
  • The researcher asks consumers about their probability of purchasing a product or service presented in an advertisement or campaign.
  2. Purchasing Research:
  • In this study the actual sales are tracked after consumers are exposed to advertising.
2. Media Research
Revolves around 3 main areas: audience size & composition; efficiency of advertising exposure; and advertising activities of competitors.
  • Reach studies – studies of the size and composition of audience of a particular medium/media
  • Reach & Frequency studies – studies of the efficiency of advertising exposures delivered by a mixture of media
  • Studies of the advertising activities of the competitors
  1. Audience size & composition
  • Audience studies help advertisers to get accurate information about the demographic characteristics of audience of a particular mass medium.
  • This audience information justifies why advertisers should inject huge amounts of cash into paying for airtime and space in the media.
  • In print media, the audience size of a newspaper is measured in terms of the number of copies distributed per issue. This 'number', or newspaper circulation, includes all copies distributed to subscribers, newsstands, and other sellers.
  • Because a newspaper's advertising rate is determined by its circulation, newspapers have developed their own standardized methods of measuring circulation.
Circulation figures are employed in computing the CPMs (cost per thousand) of various newspapers. For example:
  • Nipashe newspaper, which has a circulation of 16,000 a day, charges Tsh 1,600,000/= for one ad;
  • Mwananchi newspaper, which has a circulation of 28,000 a day, charges Tsh 2,000,000/= for one ad.
To determine advertising efficiency from the advert cost and circulation, the following calculations are performed:-
  • Nipashe: advert cost 1,600,000/-; circulation 16,000; cost per circulated copy = 1,600,000 ÷ 16,000 = Tsh 100/-
  • Mwananchi: advert cost 2,000,000/-; circulation 28,000; cost per circulated copy = 2,000,000 ÷ 28,000 ≈ Tsh 71/-
(Scaled up by 1,000, these figures correspond to a CPM of Tsh 100,000 for Nipashe and roughly Tsh 71,400 for Mwananchi.)
Thus, Mwananchi is a more efficient advertising vehicle than Nipashe.
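For those who want the arithmetic spelled out, here is a minimal Python sketch of the comparison above. The figures are the illustrative ones from these notes; the variable names are my own.

```python
# Cost-efficiency comparison of two newspapers as advertising vehicles.
# Illustrative figures from the notes; CPM is simply the cost per copy scaled up by 1,000.
papers = {
    "Nipashe":   {"ad_cost": 1_600_000, "circulation": 16_000},
    "Mwananchi": {"ad_cost": 2_000_000, "circulation": 28_000},
}

for name, p in papers.items():
    cost_per_copy = p["ad_cost"] / p["circulation"]   # Tsh per circulated copy
    cpm = cost_per_copy * 1_000                        # Tsh per 1,000 circulated copies
    print(f"{name}: Tsh {cost_per_copy:.0f} per copy, CPM = Tsh {cpm:,.0f}")

# The paper with the lower figure (here Mwananchi) is the more efficient advertising vehicle.
```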
Techniques of determining advertising efficiency:
  • Unaided recall – involves soliciting information from respondents who have read a newspaper in the past month. The researcher aims to verify whether readers remember which newspaper and which content they read.
  • Aided recall – aims at identifying people who have read newspapers recently. Respondents are asked to recall seeing/reading recent copies by describing, for example, the front page, and to recall any story they read in a given newspaper.
  • Recognition – aims to find out in which newspaper the reader recognizes having read a particular story/ad/etc. It entails showing the reader a logo/front page/cover page of a publication.
  • Media efficiency – the number of times a person reads each issue of a newspaper/magazine. A paper whose issues are frequently read by most people tends to be a more efficient advertising vehicle because it provides more possible exposures to an advertisement for the same cost.
  • Audience composition – the most important gauge of advertising efficiency. Here advertisers conduct a survey to determine the demographic characteristics of the people who tend to buy a particular product. For example, the demographic characteristics of potential customers of Safari Lager beer could be: males between 45 and 65, educated, retired, urban dwellers, etc. This demographic information is then compared with the characteristics of a newspaper's audience.
  • To determine the audience size and composition of electronic media, refer to ratings and non-ratings studies.
Reach & Frequency:
Overall, Media Research involves reach and frequency.
  • Reach: the total number of households that will be exposed to a message in a particular medium at least once over a certain period (usually 4 weeks).
  • It is a cumulative audience, expressed as a percentage of the total universe of households exposed to a media message. For example, if 25 out of 100 households are exposed to a media message, then reach is 25%.
  • Frequency: the number of exposures to the same message that each household receives.
  • Because it is impossible for every household to be exposed to a media message, advertisers prefer to use the average frequency of exposure, calculated with this formula:-
Average frequency = Total exposures for all households ÷ Reach

Example: 400 total household exposures ÷ 25 (reach) = 16 average frequency.
Therefore, the average household was exposed 16 times.
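A minimal Python sketch of the reach and average-frequency calculation, using the illustrative figures above (100 households in the universe, 25 reached, 400 total exposures):

```python
# Reach and average frequency, using the illustrative figures in the notes.
total_households = 100      # the total "universe" of households
exposed_households = 25     # households exposed to the message at least once
total_exposures = 400       # exposures summed across all households

reach_pct = exposed_households / total_households * 100
average_frequency = total_exposures / exposed_households

print(f"Reach: {reach_pct:.0f}%")                      # 25%
print(f"Average frequency: {average_frequency:.0f}")   # 16 exposures per reached household
```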
3. Campaign Assessment Research
  • Builds on Copy testing and Media research.
  • There are 2 types of Campaign Assessment studies: Pre-test/Post-test and Tracking studies
  1. Pre-test/Post-test studies
  • Takes the measurement before and after the campaign
  • Use personal interviews for data collection.
  • Same people may be interviewed before and after the commencement of a campaign.
  • The before-and-after measures are intended to gauge the effects of the advertising. Example: opinion polls
  2. Tracking studies
  • Assess the impact of the campaign by measuring the effects at several times during the progress of a campaign.
  • Tracking studies provide continuous feedback to the advertiser while campaign is progressing.
  • The feedback may lead to changes in the creative strategy or the media strategy
  • Rely on personal interviews for data collection
  • Though expensive, tracking smooths out the effects of short-term factors like poor weather or bad publicity
PART 2: Public Relations Research
PR has become a more research-centred field of study in recent years. It draws on several research techniques, such as surveys, content analysis, and focus groups.
Types of PR Research:
  1. Applied research – examines specific practical issues.
  • It is conducted as strategic research to develop, say, PR campaigns and programs.
  • Strategic research provides a rationale for where the company wants to be in the future and how to get there.
  • It is also conducted as evaluation research to assess the effectiveness of a PR campaign/program.
  2. Basic research – examines the underlying processes of PR and is used in constructing theories that explain the PR process.
  3. Introspective research – examines the field of PR itself.
Areas of Research in Public Relations
There are several areas of research in PR field such as: Environmental monitoring programs; PR audits; Evaluation research; Gatekeeping research.
  1. Environmental Monitoring programs
  • Studies done to observe trends in public opinion and social events that might have a significant impact on the company.
  • The monitoring looks for a trigger event, i.e. an event/activity that might focus public concern on a topic or issue, e.g. the collapse of the 16-storey building in Dar es Salaam.
Two phases are involved:-
Phase 1: attempts to identify early warnings – i.e. emerging issues.
  • Takes the form of content analysis of publications.
  • Also uses panel studies of community leaders and other influential, knowledgeable people to identify the issues they perceive as important.
Phase 2: attempts to track public opinion on major issues.
  • Involves a longitudinal panel study where respondents are interviewed several times during a specified interval.
  • Also takes the form of a cross-sectional opinion poll in which a random sample is surveyed only once.
  2. Public Relations Audits
  • A study of the public relations position of an organization
  • Measures a company's standing both internally (in the eyes of its staff) and externally (in the opinions of customers and stakeholders)
  • It is a research tool which describes, measures, and assesses an organization's PR activities and provides guidelines for future PR programming – Simon (1986).
It involves several steps, among them:-
  • Listing the segments of the public that are most important to the organization. This is achieved through interviews with members of management and content analysis of the organization's external communication.
  • Determining how the organization is viewed by each of its audiences/publics. This involves conducting a corporate image study, i.e. a survey of audience samples to measure their familiarity with the company (using logos/brand – product name).
  3. Communication audit
Concerned with the external and internal means of communication used by an organization
Techniques used:
  • Conducting readership surveys to measure how people read the company's publications (staff newsletters, annual reports, leaflets, etc.) and whether they remember the messages they contain. The results are used to improve the content, appearance, and method of distribution
  • Content analysis reveals how the media are handling news and other information about the organization.
  • Readability studies help the company gauge the ease with which its staff publications and press releases can be read (a short sketch of one common readability formula follows below).
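As a rough illustration of a readability test, here is a minimal Python sketch of the widely used Flesch Reading Ease formula. The press-release text and the naive syllable counter are my own assumptions for illustration; a real study would use a tested readability tool.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Rough Flesch Reading Ease score: higher values mean easier reading."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()] or [text]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word: str) -> int:
        # naive heuristic: count vowel groups, with a minimum of one per word
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (total_syllables / len(words)))

press_release = ("The company today announced a new community water project. "
                 "Work begins next month and will employ fifty local residents.")
print(round(flesch_reading_ease(press_release), 1))
```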
  4. Social Audits
A social audit is designed to measure the company's social performance, i.e. how well it is living up to its social/public responsibilities.
It provides feedback on company-sponsored social activities/programs, for example, environmental cleanup.
  5. Evaluation Studies
Evaluation research involves judging the effectiveness of a program's planning, implementation, and impact.
  1. In planning, the researcher looks at the scope of the target program and the cost implications relative to the desired benefits.
  • Content analysis is used to determine how closely the program efforts coincide with the actual plan.
  • Readability tests determine whether the messages can be read and understood.
  2. In implementation, the study investigates whether the program reached the targeted population or area.
  • Content analysis is used to count the number of messages placed in the media.
  • An audience study is done to determine the number of people exposed to the advertising message.
  3. In impact, the focus is on the effectiveness of the program in achieving the set goals, as well as whether the program has had unintended effects.
Involves the three levels of effect: cognitive, affective, and conative levels
  • Cognitive level – finds out how much people learned from the PR campaign
  • Affective level – measures changes in attitudes, opinions, or perceptions
  • Conative level – behavioural change is used to gauge PR impact
  6. Gate-keeping studies
This technique analyzes the characteristics of press releases and video news releases that allow them to pass through the 'gate' and appear in the media.
It examines both content and form variables.
Things to consider:
  • What style of news release gets placement in the media;
  • What news release content is most preferred by the media? Local vs. international facts and figures;
  • What type of artwork is preferred by the media? E.g. use of photos from nearby localities;
  • What grammar and syntax of a news release are preferred by a given medium?
  • What is the preferred length of a news release?
RESEARCH DESIGN
  • Research design is a master plan specifying the methods and procedures for collecting and analyzing the needed information;
  • It is a framework or the blueprint that plans the action for research project.
  • The objectives of the study determined during the early stages of the research are included in the design to ensure that the information collected is appropriate for solving the problem;
  • The researcher must specify the sources of information, and the research method or technique (survey or experiment, for example) to be followed in the study
  • Given this, the length and complexity of research designs can vary considerably, but any sound design will do the following things:
  • Identify the research problem clearly and justify its selection,
  • Name the study design –i.e. case study, experimental, cross-sectional, etc.
  • Review previously published literature associated with the problem area,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem selected,
  • Specify who constitutes the study population and how the sample will be selected;
  • Effectively describe the types of data and the methods of collecting them;
  • Describe the methods of analysis to be applied to the data;
  • Explain how data validity and reliability will be ensured;
  • Explain how ethical issues will be taken care of.
  • Research designs can be categorized in a variety of ways, though broadly there are five basic study designs for descriptive and causal research: case study, cross-sectional, longitudinal, experimental, and descriptive designs.
  • Each study design has been categorized according to:
  • Number of contacts with the study population (cross-sectional; longitudinal; pre-test & post-test designs);
  • The reference period of the study (retrospective; prospective; retrospective-prospective study designs);
  • The nature of the investigation (experimental; case study; descriptive designs).
Case study design:
  • A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey;
  • It is often used to narrow down a very broad field of research into one or a few easily researchable examples;
  • The case study research design is also useful for testing whether a specific theory and model actually applies to phenomena in the real world;
  • It is a useful design when not much is known about a phenomenon.
Cross-sectional study design:
  • Cross-sectional research designs have three distinctive features: no time dimension, a reliance on existing differences rather than change following intervention; and, groups are selected based on existing differences rather than random allocation.
  • The cross-sectional design can only measure differences between or among a variety of people, subjects, or phenomena rather than change.
  • Researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.
  • Cross-sectional designs generally use survey techniques to collect the data; they’re, therefore, relatively inexpensive and take up little time to conduct.
Longitudinal study design
  • A study design that follows the same sample over time and makes repeated observations, e.g. the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur;
  • Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships;
  • Measurements are taken on each variable over two or more distinct time periods;
  • This allows the researcher to measure change in variables over time;
  • It is a type of observational study and is sometimes referred to as a panel study.
  • Longitudinal data allow the analysis of duration of a particular phenomenon;
  • The design enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments;
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
Experimental study design:
  • Is a blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment;
  • The mass media researcher attempts to determine or predict what may occur;
  • Experimental Research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great.
  • The experimental design specifies an experimental group and a control group;
  • The independent variable is administered to the experimental group, and both groups are measured on the same dependent variable.
  • This design allows the media researcher to control the situation to be able to answer the question: “what causes something to occur?”
  • The study design permits the researcher to identify cause-and-effect relationships between variables and to distinguish placebo effects from treatment effects (a small sketch of the basic group comparison follows below).
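The core of the analysis is a comparison of the experimental and control groups on the dependent variable. Here is a minimal Python sketch with hypothetical recall scores; in practice a significance test (e.g. a t-test) would follow the comparison of means.

```python
from statistics import mean

# Hypothetical experiment: recall scores (0-10) on the dependent variable.
# The experimental group was exposed to the advertisement (independent variable);
# the control group was not.
experimental = [7, 8, 6, 9, 7, 8]
control      = [5, 4, 6, 5, 4, 5]

print(f"Experimental group mean: {mean(experimental):.2f}")
print(f"Control group mean:      {mean(control):.2f}")
print(f"Difference to be tested for significance: {mean(experimental) - mean(control):.2f}")
```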
Descriptive study design:
  • A study design which provides answers to the questions of who, what, when, where, and how associated with a particular research problem;
  • However, this study design cannot conclusively answer questions of why.
  • The design is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.
  • Respondents under this design are observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject;
  • The design is often used as a precursor to more quantitative research, the general overview giving some valuable pointers as to what variables are worth testing quantitatively;
  • The design allows for the collection of a large amount of data for detailed analysis.
MEASUREMENTS IN RESEARCH
  • In everyday usage, measurement occurs when an established yardstick verifies the height, weight, or another feature of a physical object;
  • To measure is to discover the extent, dimensions, quantity, or capacity of something, especially by comparison with a standard.
  • Certain things lend themselves to easy measurement through the use of appropriate instruments, e.g. media content, height and weight, magnitude. This involves statistical measurements (quantitative);
  • There are situations/phenomena/issues which require measuring people's subjective feelings, attitudes, ideologies, deviance, and perceptions;
  • Both qualitative and quantitative researchers use careful, systematic methods to gather quality data. Yet differences in the styles of research and the types of data mean that they approach the measurement process differently.
Qualitative measurements
  • Qualitative researchers use a wider variety of techniques to measure and often create new measures while collecting data;
  • Measurement for qualitative researchers occurs in the data collection process, and only a little occurs in a separate, planning stage prior to data gathering;
  • Data for qualitative researchers sometimes is in the form of numbers; more often it includes written or spoken word, actions, sounds, symbols, physical objects, or visual images;
  • The qualitative researcher does not convert all observations into a single, common medium such as numbers. Instead, he or she develops many flexible, ongoing processes of measurement that leave the data in various shapes, sizes, and forms;
  • Qualitative researchers also reflect on ideas before data collection, but they develop many, if not most, of their concepts during data collection activities;
  • Researchers start gathering data and creating ways to measure based on what they encounter;
  • As they gather data, they reflect on the process and develop new ideas. The ideas give them direction and suggest new ways to measure;
  • Qualitative research relies primarily on words as its unit of analysis and its means of understanding.
  • However, it can also use voice tone, loudness, cries, sighs, laughs, and many other ways of human communication (face to face & in-depth interviews, focus group interviews)
  • Qualitative research tends to be small scale, simply because it is hugely labour intensive;
  • For example, interviews or focus groups will usually need to be transcribed before they can be analyzed.
  • The researcher is often more involved with the respondents, and so it is sometimes helpful for others to conduct the analysis;
  • Qualitative methods range from the classification of themes and interconnections to content analysis, grounded theory and discourse analysis; reliability and validity are just as important as they are in quantitative analyses.
Quantitative measurements
  • Measurements rely primarily on numbers as the main unit of analysis;
  • Researchers extensively think about variables and convert them into specific actions during a planning stage that occurs before and separate from gathering or analyzing data.
  • Researchers want to develop techniques that can produce quantitative data (i.e. data in the form of numbers).
  • The researcher, therefore, moves from abstract ideas, or variables, to specific data collection techniques to precise numerical information produced by the techniques.
  • The numerical information is an empirical representation of the abstract ideas.
  • Quantitative researchers contemplate and reflect on concepts before they gather data;
  • They construct measurement techniques that bridge concepts and data;
  • The measurement techniques define what the data will be and are directions for gathering data.
  • Though quantitative methods, such as surveys, are widely used in media research, a great deal of media research remains relatively small scale, intensive, focused on change, and concerned with human perceptions;
  • One of the most common instruments to gather numerical data is the questionnaire survey, using a series of closed questions to which responses are given;
  • Surveys typically involve large samples of respondents: large amounts of data can be gathered from a wide range of people, and the results can be analyzed with a computer-aided program, e.g. SPSS (a rough sketch of this kind of tabulation follows below).
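SPSS is the package named in these notes; purely for illustration, the same kind of tabulation can be sketched in Python with pandas. The question names and responses below are hypothetical.

```python
import pandas as pd

# Hypothetical closed-question survey data: one row per respondent,
# one column per questionnaire item.
responses = pd.DataFrame({
    "q1_trust_news":   ["Agree", "Strongly agree", "Disagree", "Agree", "Neutral"],
    "q2_daily_reader": ["Yes", "No", "Yes", "Yes", "No"],
})

# Count and percentage of each answer, question by question
for question in responses.columns:
    counts = responses[question].value_counts()
    percent = responses[question].value_counts(normalize=True) * 100
    print(pd.DataFrame({"count": counts, "percent": percent.round(1)}), "\n")

# A cross-tabulation relates the answers to two items
print(pd.crosstab(responses["q1_trust_news"], responses["q2_daily_reader"]))
```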
It is important to note that:
  • All researchers combine ideas and data to analyze the social world.
  • In both research styles, data are empirical representation of concepts, and measurement is a process that links data to concepts.
Criteria for good Measurement
  • It is important for a researcher to ensure that the instrument developed to measure a particular concept does indeed accurately measure the variable, and that the researcher is in fact measuring the concept he or she set out to measure;
  • The use of better instruments ensures greater accuracy in results, which in turn enhances the scientific quality of the research. Thus the researcher needs to assess, in some way, the "goodness" of the measures developed.
  • What, then, should be the characteristics of a good measurement? There are two major criteria for evaluating a measurement tool: validity and reliability.
Validity:
  • Validity is the ability of an instrument (for example measuring an attitude) to measure what it is supposed to measure.
  1. When mass media researchers ask a set of questions (i.e. develop a measuring instrument) with the hope that they are tapping the concept, how can they be reasonably certain that they are indeed measuring the concept they set out to measure and not something else? There is no quick answer.
  2. Researchers have attempted to assess validity in different ways, including asking questions such as:
  • "Is there consensus among my colleagues that my attitude scale measures what it is supposed to measure?"
  • "Does my measure correlate with others' measures of the 'same' concept?"
  • "Does the behavior expected from my measure predict the actual observed behavior?"
  • Media researchers expect the answers to provide some evidence of a measure’s validity. What is relevant depends on the nature of the research problem and the researcher’s judgment.
Reliability:
  • The reliability of a measure indicates the extent to which it is without bias (error free) and hence ensures consistent measurement across time and across the various items in the instrument.
  • It is an indication of the stability and consistency with which the instrument measures the concept/variable, and it helps to assess the 'goodness' of a measure.
  • In mass communication research, reliability is commonly assessed by the following methods (a small illustrative sketch follows this list):-
  • The test-retest method of determining reliability involves administering the same scale to the same respondents at two separate times to test for stability. If the measure is stable over time, the test, administered under the same conditions each time, should obtain similar results.
  • The inter-item method is a test of the consistency of respondents' answers to all the items in a measure. To the degree that items are independent measures of the same concept, they will be correlated with one another.
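Both checks boil down to correlations. Here is a minimal Python sketch with made-up attitude scores: a test-retest correlation for stability, and an item-by-item correlation matrix for internal consistency.

```python
import numpy as np

# Test-retest: the same attitude scale given to the same respondents twice.
time1 = np.array([4, 5, 3, 2, 5, 4, 3, 4])   # scores at the first administration
time2 = np.array([4, 4, 3, 2, 5, 5, 3, 4])   # scores at the second administration
print(f"Test-retest correlation: {np.corrcoef(time1, time2)[0, 1]:.2f}")  # nearer 1 = more stable

# Inter-item consistency: each row is a respondent, each column an item
# intended to measure the same concept.
items = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
    [4, 4, 5],
])
print(np.corrcoef(items, rowvar=False).round(2))   # item-by-item correlation matrix
```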
DATA COLLECTION AND DATA PROCESSING
  • Data collection is an integral part of the research design, though we treat it separately here.
  • Data collection is determined by the research technique selected for the study;
  • Data can be collected in a variety of ways, in different settings (research areas/sites) and from different sources (audiences/consumers).
  • Depending on the nature of the problem, the research design and study approach employed, data can be collected through:-
  • Interviews – face to face interviews, telephone interviews, computer-assisted interviews, and interviews through electronic media (mainly for qualitative studies, though can be used in quantitative studies);
  • Questionnaires that are either personally administered, sent through the mail, or administered electronically (mainly for quantitative studies, though they can be used in qualitative research);
  • Observation of individuals and events, which could be participant or non-participant (mainly for qualitative studies);
  • Other methods include in-depth interviews and focus group discussions (mainly for qualitative studies) and documentary review (for both quantitative and qualitative studies).
  1. After the data collection exercise has been completed, the researcher processes the data by converting them into a format that will answer the research questions and/or help test the hypotheses.
  2. Data processing generally begins with the editing and coding of the data.
  • Editing involves checking the data collection forms for omissions, legibility, and consistency in classification;
  • The editing process corrects problems such as interviewer errors before the data are transferred to a computer;
  • Coding is the assigning of numbers or symbols to responses before they are entered into the computer. The computer can then help in producing tables and applying different statistics (see the sketch after this list);
  • The researcher then analyzes the processed data by assigning meaning to them.
  • Analysis is the application of reasoning to understand and interpret the data that have been collected.
  • The appropriate analytical technique is to be determined by the research design, and the nature of the data collected.
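A minimal Python sketch of editing and coding a few returned questionnaires; the forms, categories, and numeric codes are hypothetical and only illustrate the idea.

```python
# Editing and coding a small batch of returned questionnaires (hypothetical data).
raw_forms = [
    {"id": 1, "sex": "F", "uses_radio": "Yes", "age": "34"},
    {"id": 2, "sex": "m", "uses_radio": "yes", "age": ""},    # omission: age missing
    {"id": 3, "sex": "M", "uses_radio": "No",  "age": "51"},
]

# Editing: check for omissions and enforce consistent classification
for form in raw_forms:
    form["sex"] = form["sex"].upper()
    if form["age"] == "":
        print(f"Form {form['id']}: age missing - flag for follow-up or code as missing (99)")
        form["age"] = "99"

# Coding: assign numbers to categories before computer analysis
sex_codes = {"M": 1, "F": 2}
yes_no_codes = {"YES": 1, "NO": 0}
coded = [
    {"id": f["id"],
     "sex": sex_codes[f["sex"]],
     "uses_radio": yes_no_codes[f["uses_radio"].upper()],
     "age": int(f["age"])}
    for f in raw_forms
]
print(coded)
```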
DATA ANALYSIS & INTERPRETATION
  • Data analysis and interpretation is the process of assigning meaning to the collected information and determining the conclusions, significance, and implications of the findings.
  • The steps involved in data analysis are a function of the type of data/information gathered.
  • However, the purpose of the assessment and the assessment questions provide a structure for the organization of the data and a focus for the analysis.
Analyzing and Interpreting Quantitative data:
  • The analysis of numerical (quantitative) data is expressed in mathematical terms. The most common statistical measures include:
  • Mean – the arithmetic average of the responses;
  • Standard deviation – represents the distribution (spread) of the responses around the mean. It indicates the degree of consistency among the responses;
  • Frequency distribution – indicates the frequency of each response. For example, if respondents answer a question using an agree/disagree scale, the percentage of respondents who selected each response on the scale would be reported;
  • The frequency distribution provides additional information beyond the mean, since it allows for examining the level of consensus among the data (see the sketch below).
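A minimal Python sketch of these descriptive measures, using hypothetical responses on a 1-5 scale:

```python
from statistics import mean, stdev
from collections import Counter

# Hypothetical responses to one agree/disagree item, coded 1 (strongly disagree) to 5 (strongly agree)
scores = [5, 4, 4, 3, 5, 2, 4, 4, 5, 3]

print(f"Mean: {mean(scores):.2f}")
print(f"Standard deviation: {stdev(scores):.2f}")   # spread of responses around the mean

# Frequency distribution: how many respondents chose each point on the scale
freq = Counter(scores)
for value in sorted(freq):
    share = freq[value] / len(scores) * 100
    print(f"Response {value}: {freq[value]} respondents ({share:.0f}%)")
```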
Analyzing and Interpreting Qualitative data
  • The analysis of qualitative data is conducted by organizing the data into common themes or categories.
  • It is often difficult to interpret narrative data because they lack the built-in structure found in numerical data.
  • The assessment purpose and questions can help guide/direct the focus of the data organization. The following strategies may also be helpful when analyzing narrative data.
  1. Focus groups and Interviews:
  1. Read and organize the data from each question separately. This approach permits focusing on one question at a time (e.g., experiences with tutoring services, characteristics of tutor, student responsibility in the tutoring process).
  2. Group the comments by themes, topics, or categories. This approach allows for focusing on one area at a time (e.g., characteristics of tutor – level of preparation, knowledge of content area, availability).
  2. Documentary reviews:
  • Code content and characteristics of documents into various categories (e.g., training manual – policies and procedures, communication, responsibilities).
  3. Observations:
  • Code patterns from the focus of the observation (e.g., behavioral patterns – amount of time engaged/not engaged in an activity, type of engagement, communication, interpersonal skills). A small sketch of tallying coded themes follows below.
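Once comments have been grouped into themes, a simple tally helps show which themes dominate. A minimal Python sketch with hypothetical tutoring focus-group comments:

```python
from collections import Counter

# Hypothetical focus-group comments, each already assigned a theme code by the analyst
coded_comments = [
    ("tutor availability",     "Sessions fill up too quickly in exam weeks."),
    ("tutor preparation",      "My tutor always had worked examples ready."),
    ("tutor availability",     "Evening slots would help working students."),
    ("student responsibility", "I only attended when an assignment was due."),
    ("tutor preparation",      "Some sessions felt improvised."),
]

# Tally how often each theme occurs across the transcripts
theme_counts = Counter(theme for theme, _ in coded_comments)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} comment(s)")

# Pull out all comments on one theme to support interpretation with quotations
availability_quotes = [text for theme, text in coded_comments if theme == "tutor availability"]
print(availability_quotes)
```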
  • It is important to note that:
  • The analysis of the data via statistical measures and/or narrative themes should provide answers to the assessment questions;
  • Interpreting the analyzed data from the appropriate perspective allows for determination of the significance and implications of the assessment.
ETHICAL ISSUES IN MASS COMMUNICATION RESEARCH
Definition: Ethics are norms or standards of behavior that guide moral choices about our behavior and our relationships with others.
  • The goal of ethics in media research, as in other fields of social science, is to ensure that no one is harmed or suffers adverse consequences as a result of research activities.
  • This objective is usually achieved. However, unethical activities do occur and include violating nondisclosure agreements, breaking respondent confidentiality, misrepresenting results, deceiving people, invoicing irregularities, avoiding legal liability, and more.
  • Ethical codes and regulations guide researchers and sponsors on the 'dos' and 'don'ts' of conducting a study.
  • Research supervisors often help researchers examine their research proposals for ethical dilemmas. Responsible researchers anticipate ethical dilemmas and attempt to adjust the design, procedures, and protocols during the planning process rather than treating them as an afterthought.
  • Ethical research requires personal integrity from the researcher and the supervising body.
  • Codes of ethics are applicable at each stage of the research. The aim is to ensure that no one is harmed or suffers adverse consequences from research activities
Unethical activities:
  • Violating nondisclosure agreements;
  • Breaking respondent confidentiality;
  • Misrepresenting results;
  • Deceiving people;
  • Invoicing irregularities;
  • Avoiding legal liability
Anticipate ethical dilemmas:
  • Adjust the design, procedures, and protocols accordingly.
  • Research ethics require personal integrity of the researcher and research sponsor.
Parties in Research:
Mostly three parties:
  • The researcher
  • The sponsoring client (user)
  • The respondent (subject)
The interaction of each of these parties with one or both of the other two raises a series of ethical questions. Consciously or unconsciously, each party expects certain rights and feels certain obligations towards the other parties.
Ethical Treatment of Respondents:
  1. When ethics are discussed in research design, first, think about protecting the rights of the participant or respondent.
  2. Whether data are collected in an experiment, interview, observation, or survey, the respondent has many rights to be safeguarded.
  3. Overall, research in mass media should be designed in such a way that respondents do not suffer physical harm, discomfort, pain, embarrassment, or loss of privacy.
To safeguard against these, the mass communication researcher should follow three guidelines;
  1. Explain study benefits;
  2. Explain respondent rights and protections;
  3. Obtain informed consent.
Sponsor’s Ethics
  1. Occasionally, researchers may be asked by sponsors or unscrupulous individuals (colleagues) to participate in unethical behavior.
Compliance by the researcher would be a breach of ethical standards.
  2. Researchers should avoid:-
  • Violating respondent confidentiality;
  • Changing data or creating false data to meet the desired objective;
  • Changing data presentation or interpretations;
  • Interpreting data from a biased perspective;
  • Omitting sections of data analysis and conclusions;
  • Making recommendations beyond the scope of data collected.
Researchers and Team Members
  • Another ethical responsibility of researchers is their team’s safety as well as their own.
  • The responsibility for ethical behavior rests with the researcher who, along with assistants, is charged with protecting the anonymity of both the sponsor and the respondent.
Safety: It is the researcher’s responsibility to design a project so the safety of all interviewers, surveyors, experimenters, or observers is protected.
  • Several factors may be important to consider in ensuring a researcher’s right to safety.
Ethical behavior of Assistants:
  • Researchers should require ethical compliance from team members just as sponsors expect ethical behavior from researcher.
  • Assistants are expected to carry out the sampling plan, to interview or observe respondents without bias, and to accurately record all necessary data.
Protection of Anonymity:
  • Researchers and assistants should protect the confidentiality of the sponsor’s information and anonymity of the respondents.
  • Each researcher handling data should be required to sign a confidentiality and nondisclosure statement.
Professional Standards
  • Various standards of ethics exist for the professional researcher in mass communication. Media entities, PR firms, media professional associations, and universities have codes of ethics. These codes of ethics have to be adhered to by the researcher.