The Problem with Personal Computing Today

Will any of the foregoing efforts achieve the broad-based ease of use that customers are looking for? We are in an era that resembles the early days of videocassette recorders, before VHS finally triumphed over Betamax as the standard for VCRs. What is needed, as Bill Gates observes, is computer software "designed to take everyday tasks and make them automatic, and to take complex tasks and make them easier."

Today personal computing is complicated because of conflicting standards. Could it be different tomorrow as more and more people join the trend toward networked computers and access to the World Wide Web?

As we've seen, there are different hardware and software standards, or "platforms." Platform means the particular hardware or software standard on which a computer system is based. Examples are the Macintosh platform versus the IBM-compatible platform, or Unix versus Windows NT. Developers of applications software, such as word processors or database managers, need to make different versions to run on all the platforms.

Networking complicates things even further. "Text, photos, sound files, video, and other kinds of data come in so many different formats that it's nearly impossible to maintain the software needed to use them," points out one writer. "Users must steer their own way through the complex, upgrade-crazy world of computing."

Today microcomputer users who wish to access online data sources must provide not only their own computer, modem, and communications software but also their own operating system software and applications software. Could this change in the future?

Personal Computing Tomorrow

Today you must take responsibility for making sure your computer system will be compatible with others you have to deal with. (For instance, if a Macintosh user sends you a file to run on your IBM PC, it's up to you to take the trouble to use special software that will translate the file so it will work on your system.) What if the responsibility for ensuring compatibility between different systems were left to online service providers?

In this future model, you would use your Web browser to access the World Wide Web and take advantage of applications software anywhere on the network. It would not matter what operating system you used. Applications software would become nearly disposable. You would download applications software and pay a few cents or a few dollars for each use. You would store frequently used software on your own computer. You would not need to worry about buying the right software, since it could be provided online whenever you needed to accomplish a task.

Bloatware or the Network Computer?

A new concept has entered the language, that of "bloatware." Bloatware is a colloquial name for software that is so crowded ("bloated") with features that it requires a powerful microprocessor and enormous amounts of main memory and hard-disk storage capacity to run efficiently. Bloatware, of course, fuels the movement toward upgrading, in which users must buy more and more powerful hardware to support the software. Windows 95 and the various kinds of software suites, or "officeware," are examples of this kind of software.



Against this, engineers have proposed the idea of the "network computer" or "hollow PC." This view—which not everyone accepts—is that the expensive PCs with muscular microprocessors and operating systems would be replaced by network computers costing perhaps $500 or so. Also known as the Internet PC, the network computer (NC) would theoretically be a "hollowed out" computer, perhaps without even a hard disk, serving as a mere terminal or entry point to the online universe. The computer thus becomes a peripheral to the Internet.

A number of companies are touting the $500 network computer: Sun Microsystems, Netscape Communications, Oracle, IBM, Apple Computer, Hewlett-Packard, AT&T, Silicon Graphics, Toshiba, and several others. Why would companies like these, some of them hardware manufacturers, support the notion of selling an inexpensive box? The answer is: to keep Microsoft from further dominating the world of computing.

The concept of the "hollow PC" raises some questions:

* Would a Web browser become the operating system? Or would existing operating systems expand, as in the past, taking over browser functions?

* Would such functions become the entire computer, as proponents of the network PC contend? Or would they simply become part of the personal computer's existing repertoire of skills?

* Would a network computer really be user friendly? At present, features such as graphical user interfaces require lots of hardware and software.

* Aren't high-speed connections required? Even users equipped with the fastest modems would find downloading even small programs ("applets") time-consuming. Doesn't the network computer ultimately depend on faster connections than are possible with the standard telephone lines and modems now in place?

* Most trends in computing have moved toward personal control and access, as from the mainframe to the microcomputer. Wouldn't a network computer that leaves most functions with online sources run counter to that trend?

* Would users go for it? Would computer users really prefer scaled-down generic software that must be retrieved from the Internet each time it is used? Would a pay-per-use system tied to the Internet really be cheaper in the long run?

 

 

9. Hardware Categories

Hardware consists of all the machinery and equipment in a computer system. The hardware includes, among other devices, the keyboard, the screen, the printer, and the computer or processing device itself.

In general, computer hardware is categorized according to which of the five computer operations it performs:

* Input

* Processing and memory

* Output

* Secondary storage

* Communications

External devices that are connected to the main computer cabinet are referred to as "peripheral devices." A peripheral device is any piece of hardware that is connected to a computer. Examples are the keyboard, mouse, monitor, and printer.

Input Hardware

Input hardware consists of devices that allow people to put data into the computer in a form that the computer can use. For example, input may be by means of a keyboard, mouse, or scanner. The keyboard is the most obvious. The mouse is a pointing device attached to many microcomputers. An example of a scanner is the grocery-store bar-code scanner.

* Keyboard: A keyboard includes the standard typewriter keys plus a number of specialized keys. The standard keys are used mostly to enter words and numbers. Examples of specialized keys are the function keys, labeled F1, F2, and so on. These special keys are used to enter commands.

* Mouse: A mouse is a device that can be rolled about on a desktop to direct a pointer on the computer's display screen. The pointer is a symbol, usually an arrow, on the computer screen that is used to select items from lists (menus) or to position the cursor. The cursor is the symbol on the screen that shows where data may be entered next, such as text in a word processing program.

* Scanners: Scanners translate images of text, drawings, and photos into digital form. The images can then be processed by a computer, displayed on a monitor, stored on a storage device, or communicated to another computer.

Processing & Memory Hardware

The brains of the computer are the processing and main memory devices, housed in the computer's system unit. The system unit, or system cabinet, houses the electronic circuitry, called the CPU, which does the actual processing, and the main memory, which supports processing.

* CPU—the processor: The CPU, for Central Processing Unit, is the processor, or computing part of the computer. It controls and manipulates data to produce information. In a personal computer the CPU is usually a single fingernail-size "chip" called a microprocessor, with electrical circuits printed on it. This microprocessor and other components necessary to make it work are mounted on a main circuit board called a motherboard or system board.

* Memory—working storage: Memory—also known as main memory, RAM, or primary storage—is working storage.

Output Hardware

Output hardware consists of devices that translate information processed by the computer into a form that humans can understand.

There are three principal types of output hardware—screens, printers, and sound output devices.

Secondary Storage Hardware

Secondary storage hardware consists of devices that store data and programs permanently, outside the computer's main memory. Common secondary storage media are diskettes, hard disks, magnetic tape, and optical disks.

* Diskette: A diskette, or floppy disk, is a removable flat piece of plastic that stores data and programs as magnetized spots. Two sizes of diskettes are used for microcomputers. The older and larger size is 5¼ inches in diameter. The smaller size is 3½ inches.

To use a diskette, you need a disk drive. A disk drive is a device that holds and spins the diskette inside its case; it "reads" data from and "writes" data to the disk. The words read and write are used a great deal in computing.

Read means that the data represented in magnetized spots on the disk (or tape) are converted to electronic signals and transmitted to the memory in the computer.

Write means that the electronic information processed by the computer is recorded onto disk (or tape).
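
The same write-then-read idea can be illustrated at the software level. Below is a minimal Python sketch, with a made-up file name and sample bytes, that records data to disk and then reads it back into memory.

```python
# Illustration only: write-then-read at the software level.
# The file name "example.dat" and the sample bytes are made up for this sketch.

data = bytes([0b10110010, 0b01001101])   # data as patterns of 1s and 0s

# "Write": processed information is recorded onto the disk.
with open("example.dat", "wb") as f:
    f.write(data)

# "Read": the stored patterns are brought back into memory as data.
with open("example.dat", "rb") as f:
    restored = f.read()

print(restored == data)   # True -- what was written is what is read back
```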

* Hard disk: A hard disk is a disk made out of metal and covered with a magnetic recording surface. It also holds data represented by the presence (1) and absence (0) of magnetized spots.

* Magnetic tape: With early computers, "mag. tape" was the principal method of secondary storage.

The magnetic tape used for computers is made from the same material as that used for audiotape and videotape. That is, magnetic tape is made of flexible plastic coated on one side with a magnetic material; again, data is represented by the presence and absence of magnetized spots.

A tape that is a duplicate or copy of another form of storage is referred to as a backup.

* Optical disk—CD-ROM: An optical disk is a disk that is written and read by lasers. CD-ROM, which stands for Compact Disk Read Only Memory, is one kind of optical-disk format that is used to hold text, graphics, and sound. CD-ROMs can hold hundreds of times more data than diskettes, and can hold more data than many hard disks.
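
To give a rough sense of scale (assuming a typical 650 MB CD-ROM and a 1.44 MB high-density diskette): 650 ÷ 1.44 ≈ 450, so a single CD-ROM holds roughly 450 diskettes' worth of data.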

 

10. Artificial Intelligence

Artificial intelligence (AI) is a research and applications discipline that includes the areas of robotics, perception systems, expert systems, natural language processing, fuzzy logic, neural networks, and genetic algorithms. Another area, artificial life, is the study of computer instructions that act like living organisms.

Behind all aspects of AI are ethical questions.

Artificial intelligence (AI) is a group of related technologies that attempt to develop machines to emulate human-like qualities, such as learning, reasoning, communicating, seeing, and hearing. Today the main areas of AI are:

· Robotics

· Perception systems

· Expert systems

· Natural language processing

· Fuzzy logic

· Neural networks

· Genetic algorithms

Robotics

Robotics is a field that attempts to develop machines that can perform work normally done by people. The machines themselves, of course, are called robots. A robot is an automatic device that performs functions ordinarily ascribed to human beings or that operates with what appears to be almost human intelligence.

Expert Systems

An expert system is an interactive computer program that helps users solve problems that would otherwise require the assistance of a human expert.

The programs simulate the reasoning process of experts in certain well-defined areas. That is, professionals called knowledge engineers interview the expert or experts and determine the rules and knowledge that must go into the system. Programs incorporate not only surface knowledge ("textbook knowledge") but also deep knowledge ("tricks of the trade"). An expert system consists of three components:

· Knowledge base: A knowledge base is an expert system's database of knowledge about a particular subject. This includes relevant facts, information, beliefs, assumptions, and procedures for solving problems. The basic unit of knowledge is expressed as an IF-THEN-ELSE rule ("IF this happens, THEN do this, ELSE do that"). Programs can have as many as 10,000 rules. A system called ExperTAX, for example, which helps accountants figure out a client's tax options, consists of over 2000 rules.

· Inference engine: The inference engine is the software that controls the search of the expert system's knowledge base and produces conclusions. It takes the problem posed by the user of the system and fits it into the rules in the knowledge base. It then derives a conclusion from the facts and rules contained in the knowledge base.

Reasoning may be by a forward chain or backward chain. In the forward chain of reasoning, the inference engine begins with a statement of the problem from the user. It then proceeds to apply any rule that fits the problem. In the backward chain of reasoning, the system works backward from a question to produce an answer. (A minimal forward-chaining sketch in Python follows this list.)

· User interface: The user interface is the display screen that the user deals with. It gives the user the ability to ask questions and get answers. It also explains the reasoning behind the answer.
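
As a rough illustration of how these components fit together, here is a minimal Python sketch of a toy expert system: a handful of invented IF-THEN rules serve as the knowledge base, a simple forward-chaining loop serves as the inference engine, and a print statement stands in for the user interface. The facts and rules are made up for illustration and bear no relation to a real system such as ExperTAX.

```python
# A toy expert system: knowledge base + forward-chaining inference engine.
# The facts and rules below are invented purely for illustration.

# Knowledge base: each rule is (set of IF-conditions, THEN-conclusion).
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts, rules):
    """Repeatedly apply any rule whose conditions are satisfied,
    adding its conclusion to the known facts, until nothing new fires."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# "User interface": the user states the problem as a set of facts.
user_facts = {"has_fever", "has_cough", "short_of_breath"}
conclusions = forward_chain(user_facts, rules)
print(conclusions - user_facts)   # {'possible_flu', 'see_doctor'}
```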

Natural Language Processing

Natural languages are ordinary human languages, such as English. (A second definition is that they are programming languages, called fifth-generation languages, that give people a more natural connection with computers.) Natural language processing is the study of ways for computers to recognize and understand human language, whether in spoken or written form. The problem with human language is that it is often ambiguous and often interpreted differently by different listeners.

Still, some natural-language systems are already in use. Intellect is a product that uses a limited English vocabulary to help users orally query databases. LUNAR, developed to help analyze moon rocks, answers questions about geology from an extensive database. Verbex, used by the U.S. Postal Service, lets mail sorters read aloud an incomplete address and then replies with the correct zip code.

In the future, natural-language comprehension may be applied to incoming e-mail messages, so that such messages can be filed automatically. However, this would require that the program understand the text rather than just look for certain words.
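
To make the contrast concrete, here is a minimal Python sketch of the "just look for certain words" approach; the folder names and keywords are invented for illustration, and the second example shows how easily such keyword matching misses the meaning of a message.

```python
# Naive keyword-based filing -- the "just look for certain words" approach
# the passage contrasts with real language understanding.
# Folder names and keywords are invented for illustration.

FOLDERS = {
    "meetings": {"meeting", "agenda", "schedule"},
    "invoices": {"invoice", "payment", "billing"},
}

def file_message(text):
    words = set(text.lower().split())
    for folder, keywords in FOLDERS.items():
        if words & keywords:          # any keyword present?
            return folder
    return "inbox"

print(file_message("Please find the attached invoice for March"))  # invoices
print(file_message("That bill was a real headache"))               # inbox (meaning missed)
```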

Fuzzy Logic

A relatively new concept being used in the development of natural-language systems is fuzzy logic. The traditional logic behind computers is based on either/or, yes/no, true/false reasoning. Such computers make "crisp" distinctions, leading to precise decision making. Fuzzy logic is a method of dealing with imprecise data and uncertainty, with problems that have many answers rather than one. Unlike classical logic, fuzzy logic is more like human reasoning: it deals with probability and credibility. That is, instead of being simply true or false, a proposition is mostly true or mostly false, or more true or more false.
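
As a minimal illustration (with invented temperature thresholds), the Python sketch below assigns the proposition "the room is warm" a degree of truth between 0 and 1 instead of a crisp yes or no.

```python
# Fuzzy membership: "the room is warm" is true to some degree between 0 and 1,
# rather than simply true or false. The temperature thresholds are invented.

def warm(temp_c):
    """Degree to which a temperature counts as 'warm' (0.0 to 1.0)."""
    if temp_c <= 15:
        return 0.0                 # definitely not warm
    if temp_c >= 25:
        return 1.0                 # definitely warm
    return (temp_c - 15) / 10      # partly warm in between

for t in (10, 18, 22, 30):
    print(t, "degrees ->", warm(t))
# prints 0.0, 0.3, 0.7, and 1.0 for 10, 18, 22, and 30 degrees
```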

Neural Networks

Fuzzy logic principles are being applied in another area of AI, neural networks. Neural networks use physical electronic devices or software to mimic the neurological structure of the human brain. Because they are structured to mimic the rudimentary circuitry of the cells in the human brain, they learn from example and don't require detailed instructions.

To understand how neural networks operate, let us compare them to the operation of the human brain.

· The human neural network: The word neural comes from neurons, or nerve cells. The neurons are connected by a three-dimensional lattice called axons. Electrical connections between neurons are activated by synapses.

· The computer neural network: In a hardware neural network, the nerve cell is replaced by a transistor, which acts as a switch. Wires connect the cells (transistors) with each other. The synapse is replaced by an electronic component called a resistor, which determines whether a cell should activate the electricity to other cells. A software neural network emulates a hardware neural network, although it doesn't work as fast.

Computer-based neural networks use special AI software and complicated fuzzy-logic processor chips to take inputs and convert them to outputs with a kind of logic similar to human logic.
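
As a rough software illustration of the cell-as-switch idea, the following minimal Python sketch computes the output of a single artificial neuron; the weights, bias, and inputs are invented, and a real network would adjust such weights by learning from examples rather than having them set by hand.

```python
# A single artificial neuron: weighted inputs are summed and passed through
# an activation function that decides how strongly the "cell" fires.
# Weights, bias, and inputs are invented for illustration.
import math

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid: output between 0 and 1

inputs  = [0.8, 0.2, 0.5]
weights = [0.9, -0.4, 0.3]
bias    = -0.5
print(neuron(inputs, weights, bias))    # roughly 0.57 -- the neuron "mostly fires"
```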

Neural networks are already being used in medicine and in banking.

Genetic Algorithms

A genetic algorithm is a program that uses Darwinian principles of random mutation to improve itself. The algorithms are lines of computer code that act like living organisms. A hybrid of an expert system and a genetic algorithm, called Engeneous, was used to boost performance in the Boeing 777 jet engine; the design work involved billions of calculations.
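
The following is a minimal Python sketch of the basic genetic-algorithm loop on an invented toy problem (evolving bit strings toward all 1s). It only illustrates the general idea of selection, crossover, and random mutation; it is not the Engeneous system described above, and all parameters are made up.

```python
# A toy genetic algorithm: evolve a population of bit strings toward all 1s.
# Population size, mutation rate, and the fitness goal are invented for illustration.
import random

GENES, POP, GENERATIONS, MUTATION = 20, 30, 50, 0.02

def fitness(bits):                      # more 1s = fitter
    return sum(bits)

def mutate(bits):                       # random mutation, Darwinian-style
    return [b ^ 1 if random.random() < MUTATION else b for b in bits]

def crossover(a, b):                    # combine two parents at a random point
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]    # selection: the fitter half survives
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print(fitness(max(population, key=fitness)), "of", GENES, "bits set")
```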

Computer scientists still don't know what kinds of problems genetic algorithms work best on. Still, as one article pointed out, "genetic algorithms have going for them something that no other computer technique does: they have been field-tested, by nature, for 3.5 billion years."

Genetic algorithms would seem to lead us away from mechanistic ideas of artificial intelligence and into more fundamental questions: "What is life, and how can we replicate it out of silicon chips, networks, and software?" We are dealing now not with artificial intelligence but with artificial life. Artificial life, or A-life, is a field of study concerned with "creatures"—computer instructions, or pure information—that are created, replicate, evolve, and die as if they were living organisms.

Behind everything to do with artificial intelligence and artificial life—just as it underlies everything we do—is the whole matter of ethics. Many users are not aware that computer software, such as expert systems, is often subtly shaped by the ethical judgments and assumptions of the people who create it.

We must take into consideration that humans create such technologies, use them, and have to live with the results.

 
