




Computer-aided manufacturing (CAM) is the use of computer-based software tools that assist engineers and machinists in manufacturing or prototyping product components. Its primary purpose is to create a faster production process and components with more precise dimensions and material consistency; in some cases it uses only the required amount of raw material (thus minimizing waste) while simultaneously reducing energy consumption. CAM is a programming tool that makes it possible to manufacture physical models from computer-aided design (CAD) programs: it creates real-life versions of components designed within a software package. CAM was first used in 1971 for car body design and tooling.


Traditionally, CAM has been considered as a numerical control (NC) programming tool wherein three-dimensional (3D) models of components generated in CAD software are used to generate CNC code to drive numerically controlled machine tools.

Although this remains the most common CAM function, CAM functions have expanded to integrate CAM more fully with CAD/CAM/CAE PLM solutions.

As with other “Computer-Aided” technologies, CAM does not eliminate the need for skilled professionals such as manufacturing engineers and NC programmers. In fact, CAM both leverages the value of the most skilled manufacturing professionals through advanced productivity tools and builds the skills of new professionals through visualization, simulation, and optimization tools.

The first commercial applications of CAD were in large companies in the automotive and aerospace industries; for example, UNISURF, developed in 1971 at Renault for car body design and tooling.

Historical shortcomings

Historically, CAM software was seen to have several shortcomings that necessitated an overly high level of involvement by skilled CNC machinists. CAM software would output code for the least capable machine, as each machine-tool interpreter added on to the standard G-code set for increased flexibility. In some cases, such as improperly set-up CAM software or specific tools, the CNC machine required manual editing before the program would run properly. None of these issues were so insurmountable that a thoughtful engineer could not overcome them for prototyping or small production runs; G-code is a simple language. In high-production or high-precision shops, a different set of problems was encountered, where an experienced CNC machinist had to both hand-code programs and run CAM software.

Integration of CAD with other components of the CAD/CAM/CAE PLM environment requires effective CAD data exchange. It has usually been necessary for the CAD operator to export the data in one of the common data formats, such as IGES or STL, that are supported by a wide variety of software. The output from the CAM software is usually a simple text file of G-code, sometimes many thousands of commands long, that is then transferred to a machine tool using a direct numerical control (DNC) program.
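The plain-text nature of that output is easy to see in a sketch. The function below is a toy illustration, not a real CAM post-processor; the G-code words used (G0/G1 moves, F feed rate) are standard, but the parameter names and values are invented for the example.

```python
# Minimal sketch of CAM-style output: emit G-code linear moves for a
# sequence of XY points, the kind of plain-text program a DNC link
# would transfer to a machine tool. Not a production post-processor.
def emit_gcode(points, feed_rate=200.0, safe_z=5.0, cut_z=-1.0):
    lines = ["G21 ; millimetre units", "G90 ; absolute coordinates"]
    x0, y0 = points[0]
    lines.append(f"G0 Z{safe_z:.3f} ; retract to safe height")
    lines.append(f"G0 X{x0:.3f} Y{y0:.3f} ; rapid to start point")
    lines.append(f"G1 Z{cut_z:.3f} F{feed_rate:.1f} ; plunge to depth")
    for x, y in points[1:]:
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed_rate:.1f}")
    lines.append(f"G0 Z{safe_z:.3f} ; retract")
    return "\n".join(lines)

# A small rectangular contour.
program = emit_gcode([(0, 0), (50, 0), (50, 30), (0, 30), (0, 0)])
print(program)
```

A real program of this kind would run to thousands of such commands, which is why hand-editing the output was once such a common chore.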

CAM packages could not, and still cannot, reason as a machinist can. They could not optimize toolpaths to the extent required for mass production. Users would select the type of tool, the machining process, and the paths to be used. While an engineer may have a working knowledge of G-code programming, small optimization and wear issues compound over time. Mass-produced items that require machining are often initially created through casting or some other non-machine method. This enables hand-written, short, and highly optimized G-code that could not be produced in a CAM package.

At least in the United States, there is a shortage of young, skilled machinists entering the workforce able to perform at the extremes of manufacturing: high precision and mass production. As CAM software and machines become more complicated, the skills required of a machinist advance to approach those of a computer programmer and engineer, rather than the CNC machinist being eliminated from the workforce.

Over time, the historical shortcomings of CAM are being attenuated, both by providers of niche solutions and by providers of high-end solutions. This is occurring primarily in three arenas:

- Ease of use

- Manufacturing complexity

- Integration with PLM and the extended enterprise

Ease of use

For the user who is just getting started with CAM, out-of-the-box capabilities such as process wizards, templates, libraries, machine-tool kits, automated feature-based machining, and job-function-specific tailorable user interfaces build user confidence and speed the learning curve. User confidence is further built through 3D visualization and closer integration with the 3D CAD environment, including error-avoiding simulations and optimizations.

Manufacturing complexity

The manufacturing environment is increasingly complex. The need for CAM and PLM tools by the manufacturing engineer, NC programmer, or machinist is similar to the need for computer assistance by the pilot of a modern aircraft: modern machinery cannot be properly used without it. Today's CAM systems support the full range of machine tools, including turning, five-axis machining, and wire EDM. Today's CAM user can easily generate streamlined tool paths, optimized tool-axis tilt for higher feed rates, and optimized Z-axis depth cuts, as well as drive non-cutting operations such as the specification of probing motions.

Integration with PLM and the extended enterprise

Today’s competitive and successful companies have used PLM to integrate manufacturing with enterprise operations, from concept through field support of the finished product. To ensure ease of use appropriate to user objectives, modern CAM solutions are scalable from a stand-alone CAM system to a fully integrated multi-CAD 3D solution set. These solutions are created to meet the full needs of manufacturing personnel, including part planning, shop documentation, resource management, and data management and exchange.

Machining process

Most machining progresses through four stages, each of which is implemented by a variety of basic and sophisticated strategies, depending on the material and the software available. The stages are roughing, semi-finishing, finishing, and contour milling.


Roughing

This process begins with raw stock, known as billet, and cuts it very roughly to the shape of the final model. In milling, the result often gives the appearance of terraces, because the strategy has taken advantage of the ability to cut the model horizontally. Common strategies are zig-zag clearing, offset clearing, plunge roughing, and rest roughing.
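The zig-zag (raster) clearing strategy mentioned above is simple enough to sketch. The function below is a hypothetical illustration, not production CAM logic: it generates the serpentine sequence of XY waypoints that covers a rectangular pocket at one Z level, stepping over between passes.

```python
# Illustrative sketch of zig-zag clearing: produce a serpentine list of
# XY waypoints covering a rectangular pocket, alternating direction on
# each pass and advancing by a fixed stepover. Dimensions are invented.
def zigzag_clearing(width, height, stepover):
    path = []
    y = 0.0
    left_to_right = True
    while y <= height:
        if left_to_right:
            path.append((0.0, y))
            path.append((width, y))
        else:
            path.append((width, y))
            path.append((0.0, y))
        left_to_right = not left_to_right  # reverse for the next pass
        y += stepover
    return path

# A 40 x 10 mm pocket cleared with a 5 mm stepover: three passes.
for point in zigzag_clearing(40.0, 10.0, 5.0):
    print(point)
```

In a real CAM package, these waypoints would then be post-processed into G-code moves at the chosen roughing depth.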


Semi-finishing

This process begins with a roughed part that unevenly approximates the model and cuts to within a fixed offset distance from the model. The semi-finishing pass must leave a small amount of material so the tool can cut accurately while finishing, but not so little that the tool and material deflect instead of shearing. Common strategies are raster passes, waterline passes, constant-stepover passes, and pencil milling.


Finishing

Finishing involves a slow pass across the material in very fine steps to produce the finished part. In finishing, the step between one pass and another is minimal. Feed rates are low and spindle speeds are raised to produce an accurate surface.

Contour milling

In milling applications on hardware with five or more axes, a separate finishing process called contouring can be performed. Instead of stepping down in fine-grained increments to approximate a surface, the workpiece is rotated to make the cutting surfaces of the tool tangent to the ideal part features. This produces an excellent surface finish with high dimensional accuracy.

Areas of usage

- In mechanical engineering

- In machining

- In electronic design automation, CAM tools prepare printed circuit board (PCB) and integrated circuit design data for manufacturing.

Computer-Integrated Manufacturing (CIM) in engineering is a method of manufacturing in which the entire production process is controlled by computer. Traditionally separate process methods are joined through a computer by CIM. This integration allows the processes to exchange information with one another and to initiate actions. Through this integration, manufacturing can be faster and less error-prone, although the main advantage is the ability to create automated manufacturing processes. Typically CIM relies on closed-loop control processes based on real-time input from sensors. It is also known as flexible design and manufacturing.
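The closed-loop control idea behind CIM can be shown in a minimal sketch. This is a toy proportional controller, not any real CIM system; the setpoint, gain, and readings are all invented for illustration.

```python
# Minimal sketch of closed-loop control: on each cycle, a sensor reading
# is compared with the setpoint, and a proportional correction is
# applied, driving the process variable toward the target.
def run_control_loop(setpoint, reading, gain=0.5, steps=20):
    for _ in range(steps):
        error = setpoint - reading   # real-time feedback from a "sensor"
        reading += gain * error      # corrective action on the process
    return reading

# A temperature-like variable converging on its setpoint of 100.0.
print(round(run_control_loop(setpoint=100.0, reading=20.0), 3))
```

Real CIM controllers are far more elaborate (and must cope with noise, delays, and actuator limits), but the feedback structure is the same: sense, compare, correct, repeat.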




Why is BASIC synonymous with personal computers today? It started in the early 1970s, when two Seattle teenagers named Bill Gates and Paul Allen worked for a computer firm. Gates and Allen were among the first hackers who spent almost every waking hour figuring out how mainframe and minicomputer operating systems worked. They had convinced a Seattle computer firm to let them work on the company's PDP-10 minicomputer to solve the company's systems problems. Their success in this project made a name for them.

Two years later, after both men had tried college, they read about the first personal computer—the Altair. The Altair, which used an Intel 8080 chip, was built by a firm called MITS. Gates and Allen called MITS and proposed to develop a version of BASIC that would run on the Altair personal computer. Because they did not have an Altair computer, Allen used that computer's specifications to develop a simulated Altair on the PDP minicomputer. With only the simulated Altair, the two developed a form of BASIC that they thought would work on the real computer. Less than two months after the original contact with MITS, the software was finished, and Gates and Allen were on a plane to Albuquerque to demonstrate their Altair BASIC for MITS. In true storybook fashion, the BASIC worked the first time it was loaded on the Altair, even though Gates and Allen had never seen one before.

After this success, Gates and Allen formed a company called Microsoft to make their version of BASIC—the rest is history. Today, Microsoft BASIC is an industry standard.




Just as carpenters need tools to build a house, programmers need tools to build an information system. The difference is that carpenters' tools are of a physical nature, while programmers' tools are computer languages. And just as carpenters select a particular tool to do a particular job on the house, programmers select the computer language they will use based on the type of program to be written. The question is, how do programmers choose the necessary language for an application? There are more programming languages available today than most people could learn in a lifetime, so programmers need some rules to follow in selecting a language for a particular task. This selection process can be divided into three steps: (1) determine the application to be developed, (2) identify the features the language must have in order to deal with that application, and (3) consider the practical ramifications of choosing a particular language.

In characterizing the application being developed, a programmer should consider the size and type of application and how close to the level of the machine the language must be. One language may be great for handling very large business applications but useless for controlling a machine on a second-by-second basis. Similarly, one language may be good for doing scientific computations but not very good for producing multicolor graphics.

In choosing a language to meet the needs of the application, it is best first to list the features the application requires. These can be used to rate each language on the list from which one will be chosen. Features might include the class of problems for which the language was designed, the clarity of the language's syntax (grammar), the types of data that are supported, and the ease with which a program can be moved between machines.

After the programmer develops a short list of potential languages by comparing language characteristics to the features required by the application, the last step is to consider practical questions like how much support the language requires, whether the language is available or will have to be purchased, and how much knowledge of the language the programmer already has. Once these questions are answered, the list of languages will probably have been whittled down to only one or two choices.
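The rating step described above amounts to a weighted comparison, which can be sketched in a few lines. The feature names, weights, and ratings below are entirely invented for illustration; a real evaluation would use the application's own requirements list.

```python
# Illustrative sketch of rating candidate languages against required
# features: each language gets a 0-5 rating per feature, weighted by how
# much the application cares about that feature. All numbers are made up.
def score_languages(requirements, candidates):
    """requirements: {feature: weight}; candidates: {name: {feature: rating}}."""
    scores = {}
    for name, ratings in candidates.items():
        scores[name] = sum(weight * ratings.get(feature, 0)
                           for feature, weight in requirements.items())
    # Highest total first: the short list for the practical-questions step.
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

requirements = {"business data handling": 3, "portability": 2, "syntax clarity": 1}
candidates = {
    "COBOL": {"business data handling": 5, "portability": 3, "syntax clarity": 2},
    "BASIC": {"business data handling": 2, "portability": 4, "syntax clarity": 4},
}
print(score_languages(requirements, candidates))
```

The top one or two entries of the resulting ranking would then be weighed against the practical questions of support, availability, and the programmer's existing knowledge.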




In the software development process, which is part of the systems life cycle, a crucial step is finding and correcting errors - so-called "bugs." As software becomes more complex, the process of "debugging" each program of a software package also becomes more difficult. Programmers now spend at least as much time debugging their work as they do in actually writing their programs. Nevertheless, it is generally acknowledged that bugs still exist in most commercial software. In fact, most software sold today carries a disclaimer stating, in effect, that the package is not guaranteed to work! Writing bug-free software is inherently difficult, because the logic supporting the program is inflexible. In most engineering projects, a margin of error is built into the design specifications, so a bridge, for example, usually will not collapse if an element is defective or fails. With computer software, on the other hand, each program instruction must be correct. Otherwise, the whole program may fail.

Examples of problems attributed to faulty software include the following:

- 1,800 automatic teller machines at a major bank in Tokyo shut down on payday.

- An airline's reservation system failed, forcing 14,000 travel agents to book flights manually.

- The air traffic control system at Dallas-Fort Worth International Airport began spitting gibberish, forcing controllers to track planes on paper.

- A bug in the Pennsylvania state lottery computer system allowed clerks to buy winning tickets after a drawing—about 465 winning tickets were punched out before the error was detected.

- Problems with telephone systems have included the AT&T outage in 1990 and the disrupted service to 10 million people in five states and the District of Columbia in 1991.

- A problem with the computer system that directed the Patriot antimissile system allowed a Scud missile to hit a barracks, killing 28 U.S. service people during the Gulf War. The error, attributed to a "freak" combination of ten abnormal variables, was not detected in thousands of hours of testing.




At Somers Correctional Institution, Connecticut's only maximum-security prison, a group of prisoners is spending time writing programs. The inmates wrote the programs not just for their own use but for some of the prison's operations. For example, the group wrote programs in dBASE to automate the prison's pharmacy, track the status of prisoners for prison administrators, and monitor job and pay schedules for the 1,400 inmates at Somers.

Some prisoners were in the computer field before entering prison. Others, however, learned computer programming through the prison's educational program. Many prisoners had not seen a personal computer before their incarceration but learned a lot in the six hours a day they spent in school. When administrators were confident that prisoners could write the programs, they began assigning projects to them, and prisoners have taken over many responsibilities associated with the prison's restricted information system.

The members of the group work six days a week, 10 to 12 hours a day. They believe they are gaining valuable experience that will help them when they are released from prison. In recognition of the group's effort, it has been designated as a tester of dBASE IV. Members believe their work is important—so important that one member of the group turned down an opportunity to move to a medium security prison, because it did not have computers.




It started innocently enough: At 2:25 p.m. on January 22, 1990, a New York City computer on the gigantic AT&T long-distance telephone network somehow came to believe it was overloaded and started to reject phone calls. When other computers on the network tried to pick up the slack for the supposedly overburdened New York City computer, they, too, exhibited the same weird symptoms. Eventually all 114 computers on the AT&T network were affected, and long-distance callers using the AT&T system all across the United States began to receive a busy signal or to hear the now famous "All circuits are busy" message. The problem continued for over nine hours, during which time AT&T lost between $60 million and $75 million in revenues. Companies who depend heavily on long-distance phone service for telemarketing were also big losers on this day. Some companies even sent their employees home because they could not make needed calls.

And what was the cause of this problem in the nation's largest long-distance network? Engineers traced the problem to a single logic error or "bug" in the software used to route calls on the AT&T network. This particular bug, like many others, arose during an effort to improve an existing software system. All of AT&T's computers use the same software, so all were affected by the same problem in logic.

While software designers are always working to ensure that software is bug-free, this is not always possible. A program like AT&T's faulty switching system can contain a million lines of code. Even when fault-tolerant computer systems attempt to reduce runaway system failure by having modules that check on each other, the entire system can go down if all modules simultaneously suffer the same malady.




At the Car Product Development (CPD) Division of Ford Motor Company, over 150 end users can access multiple sources of data with a new mainframe computer interface called VIEW, an acronym for Virtual Interface Engineering Window. With VIEW, end users can access mainframe data base systems, such as DB2 and IMS, and various types of flat files to create necessary reports. The system especially benefits the information systems department, because it no longer has to create a second set of information for users to access. Now, users can go directly to the data bases they need, which saves system resources.

VIEW was written in four months, in the Focus 4GL, by systems analyst Wendy Balaka. It is menu driven and allows users to easily access data bases and create reports. For example, VIEW users created 12 reports in two months. It would have taken two years to create the reports with Ford's previous system, which required systems analysts to make repeated visits to clients to ask them what they wanted.

Users can store report formats in their personal libraries to create and print reports on the fly or overnight. VIEW also contains a global library, which lets users share their report formats with other users. The system is available only on the CPD mainframe, but there are plans to develop a PC equivalent so PC users can access data from a mainframe and download it to a spreadsheet or word processor.




Bus transportation is a large expense for many school districts. Reducing this cost is the objective of the Transportation Information Management System (TIMS), which all 133 school districts in North Carolina use. This personal computer-based decision support system (DSS) lets local school districts create efficient routes and schedules for their school buses.

TIMS has four modules: (1) geocoding, which digitizes the school district's street network maps; (2) transportation, which maintains all routing and scheduling information, including the location of each student; (3) optimization, which tries to identify the least costly bus routes and schedules and enables administrators to perform "what if?" analyses; and (4) boundary planning, which provides demographic data on students.
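The optimization module's goal of identifying least-costly runs can be illustrated with a toy heuristic. This is not the actual TIMS optimizer (whose methods the text does not describe); it is a simple nearest-neighbour ordering of bus stops, with invented coordinates, to show the flavor of the problem.

```python
# Toy illustration of least-cost routing: order bus stops with a
# nearest-neighbour heuristic, always driving to the closest unvisited
# stop next. Real routing DSS models are considerably more sophisticated.
import math

def nearest_neighbour_run(depot, stops):
    remaining = list(stops)
    route = [depot]
    current = depot
    while remaining:
        nearest = min(remaining, key=lambda s: math.dist(current, s))
        remaining.remove(nearest)
        route.append(nearest)
        current = nearest
    return route

# Hypothetical stop coordinates (e.g., miles east/north of the depot).
stops = [(4.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
print(nearest_neighbour_run((0.0, 0.0), stops))
```

Even a crude heuristic like this shortens runs relative to an arbitrary stop order, which is the kind of saving (reduced mileage, buses taken off the road) reported for TIMS above.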

TIMS's data base contains information on the district's streets and roads, bus travel speed, student locations, bus stops, bus runs, and bus routes. The models are designed to determine how to reduce the time and distance of all bus runs based on the capacity of each bus. In tests in the Randolph County school district, TIMS reduced school bus mileage by 6 percent and was instrumental in reassigning or parking nine buses.

A widely used spin-off of the primary purpose of TIMS is the boundary planning module. This module allows administrators to plan school attendance districts based on student location information stored in the TIMS data base. It provides administrators with precise information about number of students, racial makeup of the student population, and distribution of students across grades for analyzing existing, new, or proposed attendance boundaries.


Date: 2015-02-03
