Friday, October 10, 2008


Eco-money is the name of many Japanese community currencies, used to connect neighbours in obtaining the goods and services they need.

In the town of Kuriyama, Hokkaidō, for instance, second grader Ami Hasegawa paid 1,000 kurins to get her favorite toy fixed. The Kurin is the local currency that was named after the township. Ami's father earned 3,000 kurins for fixing the handrail of a staircase in a neighbor's house. And her mother paid 1,000 kurins to an elderly man who wrote addresses for her on postcards in beautiful handwriting.

In spring 1999 Kusatsu in Shiga Prefecture became the first city in Japan to use eco-money, calling it the Ohmi, after the prefecture's old name. Several other cities followed suit with currencies of their own, among them Matsue in Shimane Prefecture, which calls its currency the dagger (a word borrowed from the local dialect), and Takaoka in Toyama Prefecture.

Some 30 more communities across Japan are introducing such currencies. Some municipalities plan to use the money to plant trees and reduce garbage.

Eco-Money Network Secretary General Masanari Nakayama stated, "Eco-money is a way of getting neighbors to help each other out and to deepen their ties to the community."

General insurance

General insurance or non-life insurance policies, including automobile and homeowners policies, provide payments depending on the loss from a particular financial event. General insurance typically comprises any insurance that is not determined to be life insurance. It is called property and casualty insurance in the U.S.

In the UK, general insurance is broadly divided into three areas: personal lines, commercial lines and the London market.

The London market insures large commercial risks, for example supermarkets, football players and other very specific risks. It consists of a number of insurers, reinsurers, P&I clubs, brokers and other companies that are typically physically located in the City of London. Lloyd's of London is a major participant in this market. The London market also participates in personal lines and commercial lines, domestic and foreign, through reinsurance.


A staycation is a period of time in which an individual or family stays home and relaxes, or takes day trips from home to area attractions. Staycations have become highly popular in the current hard economic times, in which unemployment levels and gas prices are high.

Because staycationers remain close to their places of employment, they may be tempted to go in to work at least part of the time, and their bosses may feel their employees are available to be called in. Staycationers also have their usual access to e-mail at home, which allows them to be contacted and tempts them to keep up with the inbox.

Staycationers may also spend money they had not planned to, as retailers and other advertisers offer "deals" to encourage them to open their wallets. These may include hotel package deals designed to lure would-be staycationers into doing some travel after all.

Peak oil

Peak oil is the point in time when the maximum rate of global petroleum extraction is reached, after which the rate of production enters terminal decline. The concept is based on the observed production rates of individual oil wells and the combined production rate of a field of related oil wells. The aggregate production rate from an oil field over time appears to grow exponentially until the rate peaks and then declines, sometimes rapidly, until the field is depleted. The same pattern has been shown to apply to the sum of a nation's domestic production rate, and it is similarly applied to the global rate of petroleum production. It is worth stressing that peak oil is not about running out of oil, but about the peaking and subsequent decline of the rate at which oil is produced.
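The rise-peak-decline curve described above is often modeled with a logistic ("Hubbert-style") curve. Here is a minimal sketch in Python with made-up, purely illustrative parameters (URR, k and the peak year are assumptions, not real-world figures): cumulative production follows a logistic, so the production *rate* climbs, peaks when roughly half the recoverable resource is gone, and then declines symmetrically.

```python
import math

# Illustrative, assumed parameters -- not actual oil data.
URR = 2000.0   # ultimate recoverable resource (arbitrary units)
k = 0.05       # logistic growth constant
t_peak = 50    # year of peak production

def production_rate(t):
    """dQ/dt for logistic cumulative production Q(t) = URR / (1 + e^{-k(t - t_peak)})."""
    e = math.exp(-k * (t - t_peak))
    return URR * k * e / (1 + e) ** 2

rates = [production_rate(t) for t in range(0, 101)]

# The rate is maximal at t_peak (here URR*k/4 = 25.0) and symmetric about it:
assert max(rates) == production_rate(t_peak)
assert abs(production_rate(30) - production_rate(70)) < 1e-9
```

Note the key property the text emphasizes: production at the peak is nowhere near zero; the field still holds about half its oil when the rate starts to fall.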

Value Averaging

What Does it Mean?
An investing strategy that works like dollar cost averaging (DCA) in terms of steady monthly contributions, but differs in its approach to the amount of each monthly contribution. In value averaging, the investor sets a target growth rate or amount on his or her asset base or portfolio each month, and then adjusts the next month's contribution according to the relative gain or shortfall made on the original asset base.

Investopedia Says...
For example, suppose an account has a value of $2,000 and the goal is for the portfolio to increase by $200 every month. If, in a month's time, the assets have grown to $2,024, the investor would fund the account with $176 ($200 - $24) worth of assets.
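The rule in the example above reduces to simple arithmetic: each month, contribute the difference between the target portfolio value and the actual value. A minimal sketch (the function name is my own, and the numbers are those of the example):

```python
def va_contribution(target_value, actual_value):
    """Value averaging: contribute whatever brings the portfolio
    back onto its target growth path (may be negative, i.e. a sale,
    if the assets overshot the target)."""
    return target_value - actual_value

# Month 1: base of $2,000 plus $200 target growth = $2,200 target.
# Assets grew to $2,024 on their own, so the contribution is $176.
print(va_contribution(2200, 2024))  # 176
```

Unlike plain dollar-cost averaging, the contribution shrinks after good months and grows after bad ones, which forces buying more when prices are low.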

Tuesday, October 7, 2008

Bear market

A bear market is described as being accompanied by widespread pessimism. Investors anticipating further losses are often motivated to sell, with negative sentiment feeding on itself in a vicious circle. The most famous bear market in history was preceded by the Wall Street Crash of 1929 and lasted from 1930 to 1932, marking the start of the Great Depression. A milder, low-level, long-term bear market occurred from about 1973 to 1982, encompassing the stagflation of the U.S. economy, the 1970s energy crisis, and the high unemployment of the early 1980s.
Prices fluctuate constantly on the open market; a bear market is not a simple decline, but a substantial drop in the prices of the majority of stocks in a given market over a defined period of time. According to The Vanguard Group, "While there’s no agreed-upon definition of a bear market, one generally accepted measure is a price decline of 20% or more over at least a two-month period."
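The Vanguard rule of thumb quoted above can be turned into a mechanical check. This is a rough sketch of that one measure only (the prices are invented, and the function is my own naming, not a standard indicator): flag a bear market when prices fall 20% or more from an earlier peak at least two months after that peak.

```python
def is_bear_market(monthly_prices, drop=0.20, min_months=2):
    """True if the series falls `drop` (20%) or more below any earlier
    peak, with the low occurring at least `min_months` months later."""
    for i, peak in enumerate(monthly_prices):
        for j in range(i + min_months, len(monthly_prices)):
            if monthly_prices[j] <= peak * (1 - drop):
                return True
    return False

# A 21% slide from the peak over four months qualifies:
print(is_bear_market([100, 97, 95, 90, 79]))    # True
# Ordinary fluctuation does not:
print(is_bear_market([100, 102, 99, 101, 98]))  # False
```

This captures the distinction the paragraph draws: a bear market is a sustained, substantial decline, not a one-month dip.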

Arsenal vs Sunderland

Last weekend, Hull City stunned Arsenal with a wonderful header and a long-range shot. After that loss Arsenal came back strongly, but still couldn't quite get the result they wanted. A third league defeat of the season would have been tough on the Gunners, who had a seemingly legitimate goal disallowed in the second half when the assistant referee adjudged that Theo Walcott had taken the ball out of play before pulling it back for Van Persie to tap home. In the end, an injury-time header from Cesc Fabregas earned Arsenal a 1-1 draw with Sunderland at the Stadium of Light. Substitute Grant Leadbitter's stunning 86th-minute strike had given the hosts the lead, but the Spanish midfielder nipped in, three minutes into stoppage time, to nod home Robin Van Persie's corner.

Sourav Ganguly retirement

Former Indian captain Sourav Ganguly has announced that he will retire after the upcoming Australia series. Ganguly, 36, has scored 6,888 runs in 109 Tests, with 15 hundreds. He has played 49 Tests as captain, the most by any Indian. The 21 matches won during his tenure is also an Indian record, and his winning percentage of over 40 is the highest among players who have captained India in more than one Test. Starting with a hundred on debut, Ganguly's Test average has never dipped below 40. In 311 ODIs he has scored 11,363 runs at an average of 41.02. He captained India in 147 ODIs and played his last ODI against Pakistan in Gwalior on November 15, 2007. He is one of only three players to complete the treble of 10,000 runs, 100 wickets and 100 catches in ODIs, Sanath Jayasuriya and Sachin Tendulkar being the others.

Wednesday, September 24, 2008

Arsenal vs Sheffield united:

Arsenal made a bit of history by thrashing Sheffield United 6-0. After winning their away Premier League match at Bolton, the young team proved their talent in the Carling Cup third round. Carlos Vela, the 18-year-old, scored a hat-trick in this match, and Wilshere and Bendtner also added goals for their team. Man U and Liverpool also won, and they are marching to the next round along with Arsenal. This weekend Arsenal go up against Hull City at home. Arsenal are now at the top of the Premier League table and must keep winning to retain the top spot.

Friday, September 19, 2008

This weekend: Bolton vs Arsenal

Arsenal, after a smart performance in the UEFA Champions League, will definitely be looking for a big breakthrough in the Premier League. Though they won their previous league match against Blackburn with a handsome 4-0 scoreline, they will really have to work hard to break down a strong Bolton side. Nasri, one of Arsenal's key players (who joined this summer), will miss the clash against Bolton this weekend. Tomas Rosicky, Diaby, Bischoff and striker Eduardo are also sidelined. In this match Walcott will be the key to victory, and the midfielders should help the forwards with their good passing skills. There is also an interesting rivalry match this weekend, Man U vs Chelsea, so the weekend will be a real feast for football fans.

Arsenal VS Dynamo Kiev

After losing to Liverpool in last year's UEFA Champions League quarterfinals, Arsenal have come back with high expectations. Drawn into Group G, they started their campaign against Dynamo Kiev in an away match. There was some ragged passing between the Arsenal players at the start of play, and the first half ended goalless. In the second half Dynamo Kiev scored the first goal of their campaign, a penalty that followed a mistake by Sagna. After that the Arsenal players showed some passion in the game, and in the 88th minute the captain, Gallas, scored an equalizer; the match ended in a 1-1 draw. The two teams will meet again at Arsenal's home ground in the race for a place beyond the group stage. Let's hope for good play in the future.

Friday, September 12, 2008


Australia, after winning an ODI series against Bangladesh, are now coming to India to take on the hosts in four Test matches at different grounds. Though India won the ODI series against Sri Lanka, they did not perform well in their previous Test series, and the four senior batsmen did not play up to the mark. I think they have to learn from their mistakes and perform well to win the series against the mighty Australians, who have been performing strongly against all teams in Test cricket. Let's hope for the best.

Thursday, August 28, 2008

Random access memory

Random access memory (usually known by its acronym, RAM) is a type of computer data storage. Today it takes the form of integrated circuits that allow the stored data to be accessed in any order, i.e. at random. The word random thus refers to the fact that any piece of data can be returned in a constant time, regardless of its physical location and whether or not it is related to the previous piece of data. This contrasts with storage mechanisms such as tapes, magnetic discs and optical discs, which rely on the physical movement of the recording medium or a reading head. In these devices, the movement takes longer than the data transfer, and the retrieval time varies depending on the physical location of the next item.

An early type of widespread writable random access memory was the magnetic core memory, developed in 1949-1951, and subsequently used in most computers up until the development of the static and dynamic integrated RAM circuits in the late 1960s and early 1970s. Before this, computers used relays, delay lines or various kinds of vacuum tube arrangements to implement "main" memory functions (i.e. hundreds or thousands of bits), some of which were random access, some not. Latches built out of vacuum tube triodes, and later, out of discrete transistors, were used for smaller and faster memories such as registers and (random access) register banks. Prior to the development of integrated ROM circuits, permanent (or read-only) random access memory was often constructed using semiconductor diode matrices driven by address decoders.

Types of RAM:
RAM generally stores a bit of data either in the state of a flip-flop, as in SRAM (static RAM), or as a charge in a capacitor (or transistor gate), as in DRAM (dynamic RAM), EPROM, EEPROM and Flash. Some types have circuitry to detect and/or correct random faults, called memory errors, in the stored data, using parity bits or error-correction codes. ROM, the read-only type, instead uses a metal mask to permanently enable or disable selected transistors, rather than storing a charge in them.
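The parity bits mentioned above are the simplest of these error-detection schemes. A small illustration in Python (the function names are mine; real memory controllers do this in hardware, per word, on every read and write): one even-parity bit per byte detects any single flipped bit, though it cannot say which bit flipped, and two flips cancel out undetected.

```python
def parity_bit(byte):
    """Even parity: the stored bit is 1 iff the byte has an odd number of 1 bits,
    so data bits plus parity bit always contain an even number of 1s."""
    return bin(byte).count("1") % 2

def check(byte, stored_parity):
    """Recompute parity on read and compare with the bit stored at write time."""
    return parity_bit(byte) == stored_parity

b = 0b1011_0010          # four 1 bits -> parity bit 0
p = parity_bit(b)        # stored alongside the data when the byte is written

assert check(b, p)                    # clean read-back: parity matches
assert not check(b ^ 0b0000_1000, p)  # a single flipped bit is detected
```

Error-correcting codes (such as Hamming codes) extend this idea with several check bits so the faulty bit can be located and repaired, not just detected.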

Monday, August 25, 2008


The iPod is a popular brand of portable media players designed and marketed by Apple Inc. and launched on October 23, 2001. As of 2008, the current product line-up includes the hard drive-based iPod Classic, the touchscreen iPod Touch, the video-capable iPod Nano, the screenless iPod Shuffle and the iPhone. Former products include the compact iPod Mini and the spin-off iPod Photo (since re-integrated into the main iPod Classic line). iPod Classic models store media on an internal hard drive, while all other models use flash memory to enable their smaller size (the discontinued Mini used a Microdrive miniature hard drive). As with many other digital music players, iPods, excluding the iPod Touch, can also serve as external data storage devices. Storage capacity varies by model. Apple's iTunes software can be used to transfer music to the devices from computers running certain versions of the Apple Macintosh and Microsoft Windows operating systems. For users who choose not to use Apple's software, or whose computers cannot run iTunes, several open-source alternatives to iTunes are available. iTunes and its alternatives may also transfer photos, videos, games, contact information, e-mail settings, Web bookmarks, and calendars to iPod models supporting those features. Apple focused its development on the iPod line's unique user interface and its ease of use, rather than on technical capability. As of September 2007, more than 150 million iPods had been sold worldwide, making it the best-selling digital audio player series in history.


The iPod line can play several audio file formats including MP3, AAC/M4A, Protected AAC, AIFF, WAV, Audible audiobook, and Apple Lossless. The iPod Photo introduced the ability to display JPEG, BMP, GIF, TIFF, and PNG image file formats. Fifth and sixth generation iPod Classics, as well as third generation iPod Nanos, can additionally play MPEG-4 (H.264/MPEG-4 AVC) and QuickTime video formats, with restrictions on video dimensions, encoding techniques and data-rates. Originally, iPod software only worked with Mac OS; iPod software for Microsoft Windows was launched with the second generation model. Unlike most other media players, Apple does not support Microsoft's WMA audio format, but a converter for WMA files without Digital Rights Management (DRM) is provided with the Windows version of iTunes. MIDI files also cannot be played, but can be converted to audio files using the "Advanced" menu in iTunes. Alternative open-source audio formats, such as Ogg Vorbis and FLAC, are not supported without installing custom firmware onto an iPod.
During installation, an iPod is associated with one host computer. Each time an iPod connects to its host computer, iTunes can synchronize entire music libraries or music playlists either automatically or manually. Song ratings can be set on an iPod and synchronized later to the iTunes library, and vice versa. A user can access, play, and add music on a second computer if an iPod is set to manual and not automatic sync, but anything added or edited will be reversed upon connecting and syncing with the main computer and its library. If a user wishes to automatically sync music with another computer, an iPod's library will be entirely wiped and replaced with the other computer's library.

Thursday, August 7, 2008

Sun Microsystems

Sun Microsystems is a multinational vendor of computers, computer components, computer software, and information technology services, founded on 24 February 1982.[4] The company is headquartered in Santa Clara, California (part of Silicon Valley), on the former west campus of the Agnews Developmental Center.

Products include computer servers and workstations based on its own SPARC processors as well as AMD's Opteron and Intel's Xeon processors; storage systems; and, a suite of software products including the Solaris Operating System, developer tools, Web infrastructure software, and identity management applications. Other technologies of note include the Java platform and NFS.

Sun is a proponent of open systems in general and UNIX in particular and a major contributor of open source software.[5]

Sun's manufacturing facilities are located in Hillsboro, Oregon and Linlithgow, Scotland.


The initial design for what became Sun's first Unix workstation, the Sun 1, was conceived by Andy Bechtolsheim when he was a graduate student at Stanford University in Palo Alto, California. He originally designed the SUN workstation for the Stanford University Network communications project as a personal CAD workstation. It was designed as a 3M computer: 1 MIPS, 1 Megabyte and 1 Megapixel. It was designed around the Motorola 68000 processor with an advanced Memory management unit (MMU) to support the Unix operating system with virtual memory support.[6] He built the first ones from spare parts obtained from Stanford's Department of Computer Science and Silicon Valley supply houses.[7]

On February 12, 1982 Vinod Khosla, Andy Bechtolsheim, and Scott McNealy, all Stanford graduate students, founded Sun Microsystems. Bill Joy of Berkeley (a primary developer of BSD), joined soon after and is counted as one of the original founders[8]. The Sun name is derived from the initials of the Stanford University Network. Sun was profitable from its first quarter in July 1982.

Sun's initial public offering was in 1986 under the stock symbol SUNW, for Sun Workstations (later Sun Worldwide).[9][10] The symbol was changed in 2007 to JAVA; Sun stated that the brand awareness associated with its Java platform better represented the company's current strategy.[11]

Sun's logo, which features four interleaved copies of the word sun, was designed by professor Vaughan Pratt, also of Stanford University. The initial version of the logo had the sides oriented horizontally and vertically, but it was subsequently redesigned so as to appear to stand on one corner.

The first Sun workstations ran a Version 7 Unix System port by UniSoft on 68000 processor-based machines.

The "Bubble" and its aftermath

During the dot-com bubble, Sun experienced dramatic growth in revenue, profits, share price, and expenses. Some part of this was due to genuine expansion of demand for web-serving cycles, but another part was synthetic, fueled by venture capital-funded startups building out large, expensive Sun-centric server presences in the expectation of high traffic levels that never materialized. The share price in particular increased to a level that even the company's executives were hard-pressed to defend. In response to this business growth, Sun expanded aggressively in all areas: head-count, infrastructure, and office space.

The bursting of the bubble in 2001 was the start of a period of poor business performance for Sun.[12] Sales dropped as the growth of online business failed to meet predictions. As online businesses closed and their assets were auctioned off, a large amount of high-end Sun hardware was available very cheaply. Much like Apple, Sun relied a great deal on hardware sales.

Multiple quarters of substantial losses and declining revenues have led to repeated rounds of layoffs,[13][14][15] executive departures, and expense-reduction efforts. In December 2001 the share price dropped to the 1998 pre-bubble level of about one hundred dollars, and then kept falling, a rapid decline even by the standards of the high-tech sector at that time. The stock dipped below 10 dollars a year later, one-tenth of its 1990 value, then quickly bounced back to 20, where it has hovered ever since. In mid-2004, Sun ceased manufacturing operations at their Newark, California facility and consolidated all of the company's US-based manufacturing operations to their Hillsboro, Oregon facility, as part of continued cost-reduction efforts.[16] In 2006 Sun closed the Newark campus completely and moved 2,300 staff to its other campuses in the area.[17]

Many companies (like E-Trade and Google) chose to build Web applications based on large numbers of the less expensive PC-class x86-architecture servers running Linux, rather than a smaller number of high-end Sun servers. They reported benefits including substantially lower expenses (both acquisition and maintenance) and greater flexibility based on the use of open-source software. That trend is slowing and may be reversing,[citation needed] given (1) the throughput and efficiency of Sun's new horizontally-scaled systems (see below) and (2) the fact that both Sun's flagship Solaris operating system and its UltraSPARC T1 processor are now fully open-source.

Higher-level telecoms control systems such as NMAS and OSS services predominantly use Sun equipment. This use is due mainly to the company basing its products around a mature and very stable version of the Unix operating system, and to the support service that Sun provides.

Sunday, July 20, 2008


Microsoft Milan Surface Computer

The latest trend in computer interaction is touch. From Jeff Han's demos, where multi-touch user interfaces got their first big public airing, to the impending iPhone launch, everyone's thinking of innovative ways to control machines using just their fingers. Microsoft is no exception: today they've announced the first product from what they're calling their Surface Computing group, a tabletop computer for retail outlets that's been code-named Milan. And we've got a hands-on report, with photos and video, right after the jump, of course.

It's an acrylic table that's 22 inches high, with a 30-inch horizontal display. Remember those tabletop arcade games in bars in the 80s? It looks something like that. Inside, there's a PC running Vista, a projector, and an array of cameras that track objects and touch on the surface of the screen. With a little special programming sauce, it all comes together in a very slick experience.

For instance, you can take a digital camera that's Wi-Fi enabled, put it down on the tabletop, and the machine recognizes it and downloads the photos. Then, you can interact with them much like actual physical photos—you can pass them around the table, shuffle them into piles to sort them, pull on the corners to zoom in or out. It's intuitive, quick, and brings a fun social aspect to a task (photo editing) that can be the very definition of tedious.


We've had a chance to play with Milan twice—once at CES in January, and once last week. They're demoing other slick applications. The Music application turns the table into a virtual jukebox, letting you drag songs onto a shared playlist that could power the music at a bar or restaurant. There's a Concierge application that helps you pull together an itinerary for a day out in a strange city, complete with recommendations and great looking maps.

The major focus of this first generation device is at retail and in bars and hotels. The launch partners, who will be rolling out the machines in November, are Harrah's Entertainment, Sheraton Hotels, and T-Mobile.

The T-Mobile demo was interesting. They'll be installing the machines in T-Mobile stores, and the idea is that it's something between a traditional retail experience and a website. You'll place a phone on the unit, and it will pop up not only the price, but information about the phone. You'll be able to flip through service plans and options, and when you find what you're looking for, you'll drag it onto the phone, and it will be added. At the end, you hit check out, and the phone is provisioned, and delivered to your house. It's slick.

The Microsoft folks I talked to about Milan think that the surface computing market is potentially a multi-billion-dollar business, and having seen the demos, I think they might be right. But there are more than a few barriers to overcome. Right now, the machine is using a series of tags on some physical objects to recognize them; that's not going to fly in the real world. The Milan team is going to have to get a lot of manufacturers and other companies to do something to help identify their gadgets.

And Microsoft is launching the platform in a very constrained way. Right now, as I've said, it's just for big retail clients, which means that you won't have a Milan coffee table any time soon, although that might be the real killer app here. Imagine controlling a Media Center PC like this, or doing interactive slideshows at your house.

This is some exciting technology, and I'm really interested to see how people react to it. I'm not going to go out and say that it's going to change the world (remember the Segway hype?), but it's innovative and intriguing, and nice to see from a company that we tend to criticize for a lack of those traits. —Mark McClusky

Bonus: Check out our Milan coverage elsewhere on Gadget Lab and on Epicenter. And scroll down for a couple of videos showing the tabletop in action.

Friday, April 11, 2008


Mac OS is the trademarked name for a series of graphical user interface-based operating systems developed by Apple Inc. (formerly Apple Computer, Inc.) for their Macintosh line of computer systems. The Macintosh user experience is credited with popularizing the graphical user interface. The original form of what Apple would later name the "Mac OS" was the integral and unnamed system software first introduced in 1984 with the original Macintosh, usually referred to simply as the System software.

Apple deliberately downplayed the existence of the operating system in the early years of the Macintosh to help make the machine appear more user-friendly and to distance it from other operating systems such as MS-DOS, which were portrayed as arcane and technically challenging. Much of this early system software was held in ROM, with updates typically provided free of charge by Apple dealers on floppy disk. As increasing disk storage capacity and performance gradually eliminated the need for fixing much of an advanced GUI operating system in ROM, Apple explored cloning while positioning major operating system upgrades as separate revenue-generating products, first with System 7 and System 7.5, then with Mac OS 7.6 in 1997.

Earlier versions of the Mac OS were compatible only with Motorola 68000-based Macintoshes. As Apple introduced computers with PowerPC hardware, the OS was upgraded to support this architecture as well. Mac OS X, which has superseded the "Classic" Mac OS, is compatible with both PowerPC and Intel processors.


The early Macintosh operating system initially consisted of two pieces of software, called "System" and "Finder", each with its own version number.[1] System 7.5.1 was the first to include the Mac OS logo (a variation on the original "Happy Mac" smiley face Finder startup icon), and Mac OS 7.6 was the first to be named "Mac OS" (to ensure that users would still identify it with Apple, even when used in "clones" from other companies).

Until the advent of the later PowerPC G3-based systems, significant parts of the system were stored in physical ROM on the motherboard. The initial purpose of this was to avoid using up the limited storage of floppy disks on system support, given that the early Macs had no hard disk. (Only one model of Mac was ever actually bootable using the ROM alone, the 1991 Mac Classic model.) This architecture also allowed for a completely graphical OS interface at the lowest level without the need for a text-only console or command-line mode. A fatal software error, or even a low-level hardware error discovered during system startup (such as finding no functioning disk drives), was communicated to the user graphically using some combination of icons, alert box windows, buttons, a mouse pointer, and the distinctive Chicago bitmap font. Mac OS depended on this core system software in ROM on the motherboard, a fact which later helped to ensure that only Apple computers or licensed clones (with the copyright-protected ROMs from Apple) could run Mac OS.

The Mac OS can be divided into two families of operating systems:

  • "Classic" Mac OS, the system which shipped with the first Macintosh in 1984 and its descendants, culminating with Mac OS 9.
  • The newer Mac OS X (the "X" refers to the Roman numeral, ten). Mac OS X incorporates elements of OpenStep (thus also BSD Unix and Mach) and Mac OS 9. Its low-level BSD-based foundation, Darwin, is free software/open source software.

"Classic" Mac OS (1984-2001)

Original 1984 Macintosh desktop
Main article: Mac OS history

The "classic" Mac OS is characterized by its total lack of a command line; it is a completely graphical operating system. Heralded for its ease of use and its cooperative multitasking, it was criticized for its very limited memory management, lack of protected memory, and susceptibility to conflicts among operating system "extensions" that provide additional functionality (such as networking) or support for a particular device. Some extensions may not work properly together, or work only when loaded in a particular order. Troubleshooting Mac OS extensions can be a time-consuming process of trial and error.

The Macintosh originally used the Macintosh File System (MFS), a flat file system with only one level of folders. This was quickly replaced in 1985 by the Hierarchical File System (HFS), which had a true directory tree. Both file systems are otherwise compatible.

Extensions Manager under Mac OS 9

Most file systems used with DOS, Unix, or other operating systems treat a file as simply a sequence of bytes, requiring an application to know which bytes represented what type of information. By contrast, MFS and HFS gave files two different "forks". The data fork contained the same sort of information as other file systems, such as the text of a document or the bitmaps of an image file. The resource fork contained other structured data such as menu definitions, graphics, sounds, or code segments. A file might consist only of resources with an empty data fork, or only a data fork with no resource fork. A text file could contain its text in the data fork and styling information in the resource fork, so that an application which didn't recognize the styling information could still read the raw text. On the other hand, these forks provided a challenge to interoperability with other operating systems; copying a file from a Mac to a non-Mac system would strip it of its resource fork.
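The two-fork model above can be sketched in a few lines of code. This is an illustrative data model only, not an actual HFS or Mac Toolbox API (the class and field names are my own): the data fork is the plain byte stream any system understands, while the resource fork holds typed, structured resources keyed by four-character codes such as "styl".

```python
from dataclasses import dataclass, field

@dataclass
class MacFile:
    """Toy model of an MFS/HFS file with two forks; either may be empty."""
    data_fork: bytes = b""
    resource_fork: dict = field(default_factory=dict)

doc = MacFile(
    data_fork=b"Hello, world",
    resource_fork={"styl": "bold run at offset 0"},  # hypothetical styling resource
)

# Copying to a non-Mac file system keeps only the data fork:
stripped = MacFile(data_fork=doc.data_fork)
print(stripped.data_fork)      # the raw text survives
print(stripped.resource_fork)  # the styling resource is lost
```

This mirrors the interoperability problem in the paragraph: the raw text travels fine, but everything stored in the resource fork is silently dropped.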

The Classic OS is still supported and Classic Applications Support was shipped in addition to OS X with PowerPC (but not Intel) Macs until early 2006. However, Intel-based Macintoshes cannot run the Classic system or applications, nor can PowerPC models that have been upgraded to Mac OS 10.5 Leopard.

Mac OS X (2000-present)

Main article: Mac OS X

Mac OS X brought Unix-style memory management and pre-emptive multitasking to the Mac platform. It is based on the Mach kernel and the BSD implementation of UNIX, which were incorporated into NeXTSTEP, the object-oriented operating system developed by Steve Jobs' NeXT company. The new memory management system allowed more programs to run at once and virtually eliminated the possibility of one program crashing another. It is also the second Macintosh operating system to include a command line (the first is the now-discontinued A/UX, which supported classic Mac OS applications on top of a UNIX kernel), although it is never seen unless the user launches a terminal emulator.

However, since these new features put higher demands on system resources, Mac OS X only officially supported the PowerPC G3 and newer processors, and now has even higher requirements (the additional requirement of built-in USB (10.3) and later FireWire (10.4)). Even then, it runs somewhat slowly on older G3 systems for many purposes.

For over three years now, Mac OS X has gotten faster with every release — and not just "faster in the experience of most end users", but faster on the same hardware. This trend is unheard of among contemporary desktop operating systems.[2]

PowerPC builds of Mac OS X include a compatibility layer for running older Mac applications, the Classic Environment. This runs a full copy of the older Mac OS, version 9.1 or later, in a Mac OS X process. PowerPC-based Macs shipped with OS 9.2 as well as OS X, although OS 9.2 had to be installed by the user on hardware revisions released after Mac OS X 10.4, as it was no longer installed by default. Most well-written "classic" applications function properly under this environment, but compatibility is only assured if the software was written to be unaware of the actual hardware and to interact solely with the operating system. The Classic Environment is not available on Intel-based Macintoshes, due to the incompatibility of Mac OS 9 with x86 hardware, and was removed completely in Mac OS X 10.5.

Users of the original Mac OS generally upgraded to Mac OS X, but many criticized it as more difficult and less user-friendly than the original Mac OS, for lacking certain features that had not been re-implemented in the new OS, for being slower on the same hardware (especially older hardware), or for other, sometimes serious incompatibilities with the older OS. Because drivers (for printers, scanners, tablets, etc.) written for the older Mac OS are not compatible with Mac OS X, and because OS X does not support older Apple machines, a significant number of Macintosh users continued using the older classic Mac OS. By 2005, however, it was reported that almost all users of systems capable of running Mac OS X were doing so, with only a small percentage still running the classic Mac OS.[citation needed]

In June 2005, Steve Jobs announced at the Worldwide Developers Conference keynote that Apple computers would be transitioning from PowerPC to Intel processors. At the same conference, Jobs announced Developer Transition Kits that included beta versions of Apple software including Mac OS X that developers could use to test their applications as they ported them to run on Intel-powered Macs. In January 2006, Apple released the first Macintosh computers with Intel processors, an iMac and the MacBook Pro, and in February 2006, Apple released a Mac mini with an Intel Core Solo and Duo processor. On May 16, 2006, Apple released the MacBook, before completing the Intel transition on August 7 with the Mac Pro. To ease the transition for early buyers of the new machines, Intel-based Macs include an emulation technology called Rosetta, which allows them to run (at reduced speed) pre-existing Mac OS X native application software which was compiled only for PowerPC-based Macintoshes.

Saturday, March 15, 2008



The Shadow robot hand system holding a lightbulb.


Robotics is the science and technology of robots: their design, manufacture, and application.[1] Robotics requires a working knowledge of electronics, mechanics and software, and usually draws on many other subjects as well.[2] A person working in the field is a roboticist.

Although the appearance and capabilities of robots vary vastly, all robots share the features of a mechanical, movable structure under some form of autonomous control. The structure of a robot is usually mostly mechanical and can be called a kinematic chain (its functionality being akin to the skeleton of the human body). The chain is formed of links (its bones), actuators (its muscles) and joints which can allow one or more degrees of freedom. Most contemporary robots use open serial chains in which each link connects the one before to the one after it. These robots are called serial robots and often resemble the human arm. Some robots, such as the Stewart platform, use closed parallel kinematic chains. Other structures, such as those that mimic the mechanical structure of humans, various animals and insects, are comparatively rare. However, the development and use of such structures in robots is an active area of research (e.g. biomechanics). Robots used as manipulators have an end effector mounted on the last link. This end effector can be anything from a welding device to a mechanical hand used to manipulate the environment.


The word robotics was first used in print by Isaac Asimov, in his science fiction short story "Runaround", published in March 1942 in Astounding Science Fiction.[3] While it was based on the word "robot" coined by science fiction author Karel Čapek, Asimov was unaware that he was coining a new term. The design of electrical devices is called electronics, so the design of robots is called robotics.[4] Before the coining of the term, however, there was interest in ideas similar to robotics (namely automata and androids) dating as far back as the 8th or 7th century BC. In the Iliad, the god Hephaestus made talking handmaidens out of gold.[5] Archytas of Tarentum is credited with creating a mechanical pigeon around 400 BC.[6] Robots are used in industrial, military, exploration, domestic, and academic and research applications.[7]

Components of robots


A robot leg, powered by Air Muscles.

The actuators are the 'muscles' of a robot: the parts which convert stored energy into movement. By far the most popular actuators are electric motors, but there are many others, some powered by electricity, while others use chemicals or compressed air.

  • Motors: The vast majority of robots use electric motors, of which there are several kinds. DC motors, which are familiar to many people, spin rapidly when an electric current is passed through them, and spin backwards if the current is made to flow in the other direction.
  • Stepper Motors: As the name suggests, stepper motors do not spin freely like DC motors, they rotate in steps of a few degrees at a time, under the command of a controller. This makes them easier to control, as the controller knows exactly how far they have rotated, without having to use a sensor. Therefore they are used on many robots and CNC machining centres.
  • Piezo Motors: A recent alternative to DC motors are piezo motors, also known as ultrasonic motors. These work on a fundamentally different principle, whereby tiny piezoceramic legs, vibrating many thousands of times per second, walk the motor round in a circle or a straight line.[8] The advantages of these motors are incredible nanometre resolution, speed and available force for their size.[9] These motors are already available commercially, and being used on some robots.[10][11]
  • Air Muscles: The air muscle is a simple yet powerful device for providing a pulling force. When inflated with compressed air, it contracts by up to 40% of its original length. The key to its behaviour is the braiding visible around the outside, which forces the muscle to be either long and thin, or short and fat. Since it behaves in a very similar way to a biological muscle, it can be used to construct robots with a similar muscle/skeleton system to an animal.[12] For example, the Shadow robot hand uses 40 air muscles to power its 24 joints.
  • Electroactive Polymers: These are a class of plastics which change shape in response to electrical stimulation.[13] They can be designed so that they bend, stretch or contract, but so far there are no EAPs suitable for commercial robots, as they tend to have low efficiency or are not robust.[14] Indeed, all of the entrants in a recent competition to build EAP-powered arm-wrestling robots were beaten by a 17-year-old girl.[15] However, they are expected to improve in the future, where they may be useful for microrobotic applications.[16]
  • Elastic nanotubes are a promising, early-stage experimental technology. The absence of defects in nanotubes enables these filaments to deform elastically by several percent, with energy storage levels of perhaps 10 J/cm³ for metal nanotubes. Human biceps could be replaced with an 8 mm diameter wire of this material. Such compact "muscle" might allow future robots to outrun and outjump humans.[17]
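The open-loop control property described for stepper motors above (the controller knows the position simply by counting commanded steps, with no sensor) can be sketched in a few lines. This is an illustrative model, not a driver for real hardware; the class name is made up, though 200 steps per revolution (1.8 degrees per step) is a common motor rating.

```python
# Sketch: open-loop position tracking for a stepper motor. Because the
# motor moves exactly one step per command pulse, the controller can
# track position by counting pulses instead of reading an encoder.
class StepperController:
    def __init__(self, steps_per_rev=200):   # 1.8 degrees/step is typical
        self.steps_per_rev = steps_per_rev
        self.position = 0                    # in steps, relative to start

    def step(self, n):
        """Command n steps (negative = reverse). No sensor needed:
        the commanded count *is* the position, barring missed steps."""
        self.position += n

    def angle(self):
        """Current shaft angle in degrees, inferred from the count."""
        return 360.0 * self.position / self.steps_per_rev

ctrl = StepperController()
ctrl.step(50)    # quarter turn on a 200-step motor
ctrl.step(-25)   # back off half of that
```

The caveat, of course, is that a mechanically stalled motor silently loses steps, which is why heavily loaded machines still add limit switches or encoders.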


Robots which must work in the real world require some way to manipulate objects: to pick up, modify, destroy, or otherwise have an effect. Thus the 'hands' of a robot are often referred to as end effectors,[18] while the arm is referred to as a manipulator.[19] Most robot arms have replaceable effectors, each allowing them to perform some small range of tasks. Some have a fixed manipulator which cannot be replaced, while a few have one very general-purpose manipulator, for example a humanoid hand.

A simple gripper
  • Grippers: A common effector is the gripper. In its simplest manifestation it consists of just two fingers which can open and close to pick up and let go of a range of small objects.
  • Vacuum Grippers: Pick-and-place robots for electronic components, and for large objects like car windscreens, often use very simple vacuum grippers. These are simple astrictive devices that can hold very large loads, provided the prehension surface is smooth enough to ensure suction.
  • General purpose effectors: Some advanced robots are beginning to use fully humanoid hands, like the Shadow Hand (right), or the Schunk hand.[20] These highly dexterous manipulators, with as many as 20 degrees of freedom and hundreds of tactile sensors[21] can be difficult to control. The computer must consider a great deal of information, and decide on the best way to manipulate an object from many possibilities.
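The vacuum-gripper point above about smooth surfaces comes down to simple arithmetic: the holding force is the pressure difference across the cup times the cup area, and any leak through a rough surface erodes that pressure difference. A minimal sketch, with illustrative cup size and vacuum level:

```python
# Sketch: holding force of a vacuum gripper. Force = pressure
# difference across the suction cup times the cup's area; a rough
# surface leaks air and collapses the pressure difference.
import math

def holding_force(cup_diameter_m, vacuum_pa):
    """Ideal holding force in newtons for a circular suction cup."""
    area = math.pi * (cup_diameter_m / 2) ** 2   # cup area in m^2
    return vacuum_pa * area

# A 10 cm cup at 50 kPa of vacuum holds roughly 390 N (about 40 kg),
# which is why a single cup can lift a car windscreen.
f = holding_force(0.10, 50_000)
```

Real grippers derate this figure heavily for safety margins and for peeling loads, but the scaling with area explains the preference for large, smooth prehension surfaces.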

For a definitive guide to all forms of robot end effectors, their design and usage, consult the book "Robot Grippers".[22]


Rolling Robots

Segway in the Robot museum in Nagoya.

For simplicity, most mobile robots have four wheels. However, some researchers have tried to create more complex wheeled robots, with only one or two wheels.

  • Two-wheeled balancing: While the Segway is not commonly thought of as a robot, it can be thought of as a component of a robot. Several real robots do use a similar dynamic balancing algorithm, and NASA's Robonaut has been mounted on a Segway.[23]
  • Ballbot: Carnegie Mellon University researchers have developed a new type of mobile robot that balances on a ball instead of legs or wheels. "Ballbot" is a self-contained, battery-operated, omnidirectional robot that balances dynamically on a single urethane-coated metal sphere. It weighs 95 pounds and is the approximate height and width of a person. Because of its long, thin shape and ability to maneuver in tight spaces, it has the potential to function better than current robots can in environments with people.[24]
  • Track Robot: Another type of rolling robot is one that has tracks, like NASA's Urban Robot, Urbie. [25]

Walking Robots

iCub robot, designed by the RobotCub Consortium
Walking is a difficult and dynamic problem to solve. Several robots have been made which can walk reliably on two legs; however, none have yet been made which are as robust as a human. Typically, these robots walk well on flat floors and can occasionally walk up stairs, but none can walk over rocky, uneven terrain. Some of the methods which have been tried are:
  • Zero Moment Point (ZMP) Technique: This is the algorithm used by robots such as Honda's ASIMO. The robot's onboard computer tries to keep the total inertial force (the combination of Earth's gravity and the acceleration and deceleration of walking) exactly opposed by the floor reaction force (the force of the floor pushing back on the robot's foot). In this way the two forces cancel out, leaving no moment (a force that would cause the robot to rotate and fall over).[26] However, this is not exactly how a human walks, and the difference is quite apparent to human observers, some of whom have pointed out that ASIMO walks as if it needs the lavatory.[27][28][29] ASIMO's walking algorithm is not static, and some dynamic balancing is used (see below). However, it still requires a smooth surface to walk on.
  • Hopping: Several robots, built in the 1980s by Marc Raibert at the MIT Leg Laboratory, successfully demonstrated very dynamic walking. Initially, a robot with only one leg, and a very small foot, could stay upright simply by hopping. The movement is the same as that of a person on a pogo stick. As the robot fell to one side, it would jump slightly in that direction to catch itself.[30] Soon, the algorithm was generalised to two and four legs. A bipedal robot was demonstrated running and even performing somersaults.[31] A quadruped was also demonstrated which could trot, run, pace and bound.[32] For a full list of these robots, see the MIT Leg Lab Robots page.
  • Dynamic Balancing: A more advanced way for a robot to walk is by using a dynamic balancing algorithm, which is potentially more robust than the Zero Moment Point technique, as it constantly monitors the robot's motion and places the feet so as to maintain stability.[33] This technique was recently demonstrated by Anybots' Dexter robot,[34] which is so stable that it can even jump.[35]
  • Passive Dynamics: Perhaps the most promising approach being taken is to use the momentum of swinging limbs for greater efficiency. It has been shown that totally unpowered humanoid mechanisms can walk down a gentle slope, using only gravity to propel themselves. Using this technique, a robot need only supply a small amount of motor power to walk along a flat surface or a little more to walk up a hill. This technique promises to make walking robots at least ten times more efficient than ZMP walkers, like ASIMO.[36][37]
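The ZMP balance condition in the first bullet above can be written down concretely. The sketch below uses the common point-mass (linear inverted pendulum) simplification; that model, the function name, and the numbers are assumptions of this illustration, not ASIMO's actual controller.

```python
# Illustrative sketch of the Zero Moment Point: for a point mass at
# height com_z above the floor, gravity plus the inertial force of
# horizontal acceleration produce zero net moment about exactly one
# floor point -- the ZMP. A walking controller tries to keep that
# point inside the foot's support polygon.
G = 9.81  # gravitational acceleration, m/s^2

def zmp_x(com_x, com_z, com_accel_x):
    """Sagittal ZMP position for a point-mass model:
    x_zmp = x_com - (z_com / g) * a_x (all lengths in metres)."""
    return com_x - (com_z / G) * com_accel_x

# With zero acceleration the ZMP sits directly under the centre of
# mass; accelerating forward shifts the ZMP backward, which is why a
# robot leans into an acceleration to keep the ZMP under its feet.
```

When the computed ZMP would leave the support polygon, the controller must change the footstep plan or the body trajectory, which is what gives ZMP walkers their characteristic cautious gait.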

Wednesday, February 27, 2008

History of AMD

Advanced Micro Devices, Inc. (abbreviated AMD; NYSE: AMD) is an American company that manufactures semiconductors. It is based in Sunnyvale, California and was founded in 1969 by a group of former executives from Fairchild Semiconductor, including Jerry Sanders III, Ed Turney, John Carey, Sven Simonsen, Jack Gifford and three members from Gifford's team, Frank Botte, Jim Giles, and Larry Stenger. The current chairman and chief executive officer is Dr. Héctor Ruiz and the current president and chief operating officer is Dirk Meyer.

AMD is the world's second-largest supplier of computer processors based on the x86 architecture, and, after acquiring ATI Technologies in 2006, the third-largest supplier of graphics cards and graphics processing units (GPUs). AMD also owns a 21% share of Spansion, a supplier of non-volatile flash memory. In 2007 AMD ranked eleventh among semiconductor manufacturers.[1]

General history

Early AMD 8080 Processor (AMD AM9080ADC / C8080A), 1977

AMD started as a producer of logic chips in 1969, then entered the RAM chip business in 1975. That same year, it introduced a reverse-engineered clone of the Intel 8080 microprocessor. During this period, AMD also designed and produced a series of bit-slice processor elements (Am2900, Am29116, Am293xx) which were used in various minicomputer designs.

During this time, AMD attempted to embrace the perceived shift towards RISC with its own AMD 29K processor, and it attempted to diversify into graphics and audio devices as well as EPROM memory. AMD had some success in the mid-1980s with the AMD7910 and AMD7911 "World Chip" FSK modem, one of the first multistandard devices that covered both Bell and CCITT tones at up to 1200 baud half duplex or 300/300 full duplex. While the AMD 29K survived as an embedded processor and AMD spinoff Spansion continues to make industry-leading flash memory, AMD was not as successful with its other endeavors. AMD decided to switch gears and concentrate solely on Intel-compatible microprocessors and flash memory. This put it in direct competition with Intel for x86-compatible processors and their flash memory secondary markets.

In December 2006, it was reported that AMD, along with its main rival in the graphics industry, Nvidia, received subpoenas from the Justice Department regarding possible antitrust violations in the graphics card industry, including price fixing.[2]

Litigation with Intel

AMD has a long history of litigation with former partner and x86 creator Intel.[3][4][5]

  • In 1986, Intel broke an agreement that allowed AMD to produce Intel's microchips for IBM; AMD filed for arbitration in 1987, and the arbitrator decided in AMD's favor in 1992. Intel disputed this, and the case ended up in the Supreme Court of California. In 1994, that court upheld the arbitrator's decision and awarded damages for breach of contract.
  • In 1990, Intel brought a copyright infringement action alleging illegal use of its 287 microcode. The case ended in 1994 with a jury finding for AMD and its right to use Intel's microcode in its microprocessors through the 486 generation.
  • In 1997, Intel filed suit against AMD and Cyrix Corp. for misuse of the term MMX. AMD and Intel settled, with AMD acknowledging MMX as a trademark owned by Intel, and with Intel granting AMD rights to market the AMD K6 MMX processor.
  • In 2005, following an investigation, the Japan Federal Trade Commission found Intel guilty on a number of violations. On June 27, 2005, AMD won an antitrust suit against Intel in Japan, and on the same day, AMD filed a broad antitrust complaint against Intel in the U.S. Federal District Court in Delaware. The complaint alleges systematic use of secret rebates, special discounts, threats, and other means used by Intel to lock AMD processors out of the global market. Since the start of this action, The Court has issued subpoenas to major computer manufacturers including Dell, Microsoft, IBM, HP, Sony, and Toshiba.

Merger with ATI

AMD announced a merger with ATI Technologies on July 24, 2006. AMD paid $4.3 billion in cash and 58 million shares of its stock, for a total of US$5.4 billion. The merger was completed on October 25, 2006,[6] and ATI is now part of AMD.

AMD x86 processors

Discontinued processors

8086, Am286, Am386, Am486, Am5x86

Main articles: Am286, Am386, Am486, and Am5x86

In February 1982, AMD signed a contract with Intel, becoming a licensed second-source manufacturer of 8086 and 8088 processors. IBM wanted to use the Intel 8088 in its IBM PC, but IBM's policy at the time was to require at least two sources for its chips. AMD later produced the Am286 under the same arrangement, but Intel canceled the agreement in 1986 and refused to convey technical details of the i386 part.

AMD challenged Intel's decision to cancel the agreement and won in arbitration, but Intel disputed this decision. A long legal dispute followed, ending in 1994 when the Supreme Court of California sided with AMD. Subsequent legal disputes centered on whether AMD had legal rights to use derivatives of Intel's microcode. In the face of uncertainty, AMD was forced to develop "clean room" versions of Intel code.

In 1991, AMD released the Am386, its clone of the Intel 386 processor. It took less than a year for the company to sell a million units. Later, the Am486 was used by a number of large OEMs, including Compaq, and proved popular. Another Am486-based product, the Am5x86, continued AMD's success as a low-price alternative. However, as product cycles shortened in the PC industry, the process of reverse engineering Intel's products became an ever less viable strategy for AMD.

K5, K6, Athlon (K7)

Main articles: AMD K5, AMD K6, and Athlon

AMD's first completely in-house x86 processor was the K5, which was launched in 1996.[7] The "K" was a reference to kryptonite, the only substance in comic book lore that could harm Superman, a clear jab at Intel, which dominated the market at the time, as "Superman".[8]

In 1996, AMD purchased NexGen specifically for the rights to their Nx series of x86-compatible processors. AMD gave the NexGen design team their own building, left them alone, and gave them time and money to rework the Nx686. The result was the K6 processor, introduced in 1997.

The K7 was AMD's seventh-generation x86 processor, making its debut on June 23, 1999, under the brand name Athlon. On October 9, 2001, the Athlon XP was released, followed by the Athlon XP with 512 KB L2 cache on February 10, 2003.[9]

Current processors

Athlon 64 (K8)

Main articles: Athlon 64 and Opteron

The K8 is a major revision of the K7 architecture, with the most notable features being the addition of a 64-bit extension to the x86 instruction set (officially called AMD64), the incorporation of an on-chip memory controller, and the implementation of an extremely high performance point-to-point interconnect called HyperTransport, as part of the Direct Connect Architecture. The technology was initially launched as the Opteron server-oriented processor.[10] Shortly thereafter it was incorporated into a product for desktop PCs, branded Athlon 64.[11]

Dual-core Athlon 64 X2
Main articles: Athlon 64 X2 and Opteron

AMD released the first dual-core Opteron, an x86-based server CPU, on April 21, 2005.[12] The first desktop dual-core processor family, the Athlon 64 X2, came a month later.[13]

Quad-core "Barcelona" die-shot
Quad-core "Barcelona" die-shot

In early May 2007, AMD dropped the string "64" from its dual-core desktop product branding, which became Athlon X2, downplaying the significance of 64-bit computing in its processors. Subsequent updates brought improvements to the microarchitecture and shifted the target market from mainstream desktop systems to value dual-core desktop systems.

Phenom (K10)

Main articles: AMD K10, Opteron, and Phenom

AMD's latest microarchitecture, known as "AMD K10", is the immediate successor to the AMD K8 microarchitecture. The first processors released on this architecture were introduced on September 10, 2007, consisting of nine quad-core Third Generation Opteron processors. K10 processors will come in dual-, triple-[14] and quad-core versions, with all cores on a single die.

Future processors

Bulldozer and Bobcat

Main articles: Bulldozer and Bobcat

After the K10 architecture, AMD will move to a modular design methodology named "M-SPACE", under which two new processor cores, codenamed "Bulldozer" and "Bobcat", will be released in the 2009 timeframe. While very little preliminary information exists, even from AMD's Technology Analyst Day 2007, both cores are to be built from the ground up. The Bulldozer core is focused on 10-watt to 100-watt products, with optimizations for performance-per-watt and HPC applications, and includes the newly announced SSE5 instructions; the Bobcat core will focus on 1-watt to 10-watt products, being a simplified x86 core designed to reduce power draw. Both cores will be able to incorporate fully DirectX-compatible GPU core(s) under the Fusion label, or ship as standalone general-purpose CPUs.

AMD Fusion

Main article: AMD Fusion

After the merger between AMD and ATI, an initiative codenamed Fusion was announced that merges a CPU and GPU on one chip, including a minimum 16-lane PCI Express link to accommodate external PCI Express peripherals, eliminating the need for a separate northbridge chip on the motherboard. It is expected to be released in 2009.

Other platforms and technologies

AMD chipsets

See also: Comparison of AMD chipsets

Before the launch of Athlon 64 processors in 2003, AMD designed chipsets for its processors spanning the K6 and K7 processor generations, including the AMD-640, AMD-751 and AMD-761 chipsets. The situation changed in 2003 with the release of Athlon 64 processors, when AMD chose to stop designing its own chipsets for its desktop processors and instead opened the desktop platform to allow other firms to design chipsets. This was the "Open Platform Initiative". The initiative proved to be a success, with many firms such as Nvidia, ATI, VIA and SiS developing their own chipsets for Athlon 64 processors and later Athlon 64 X2 and Athlon 64 FX processors, including the Quad FX platform chipset from Nvidia.

The initiative went further with the release of Opteron server processors, as AMD stopped designing server chipsets in 2004 after releasing the AMD-8111 chipset, again opening the server platform for firms to develop chipsets for Opteron processors. Today, Nvidia and Broadcom are the sole designers of server chipsets for Opteron processors.

When the company completed the acquisition of ATI Technologies in 2006, it gained ATI's chipset design team, which had previously designed the Radeon Xpress 200 and Radeon Xpress 3200 chipsets. AMD then renamed the chipsets for AMD processors under AMD branding (for instance, the CrossFire Xpress 3200 chipset was renamed the AMD 580X CrossFire chipset). In February 2007, AMD announced its first AMD-branded chipset since 2004 with the release of the AMD 690G chipset (previously under the development codename RS690), targeted at mainstream IGP computing. It was the industry's first chipset to implement an HDMI 1.2 port on motherboards, and shipped more than a million units. While ATI had aimed to release an Intel IGP chipset, the plan was scrapped and the inventories of the Radeon Xpress 1250 (codenamed RS600, sold under the ATI brand) were sold to two OEMs, Abit and ASRock. Although AMD states that it will still produce Intel chipsets, Intel has not granted ATI a license for the 1333 MHz FSB. Considering the rivalry between AMD and Intel, AMD is unlikely to release more Intel chipset designs in the foreseeable future.

On November 15, 2007, AMD announced a new chipset series portfolio, the AMD 7-Series chipsets, covering everything from the enthusiast multi-graphics segment to the value IGP segment, replacing the AMD 480/570/580 chipsets and AMD 690 series chipsets and marking AMD's first enthusiast multi-graphics chipset. Discrete graphics chipsets were launched on November 15, 2007 as part of the desktop platform codenamed Spider, and IGP chipsets will be launched later, in spring 2008, as part of the platform codenamed Cartwheel.

AMD will also return to the server chipset market with the next-generation AMD 800S series server chipsets, scheduled for release in the 2009 timeframe.

Friday, February 15, 2008


Sony Corporation (ソニー株式会社, Sonī Kabushiki-gaisha) is a Japanese multinational conglomerate corporation based in Minato, Tokyo, and one of the world's largest media conglomerates, with revenue of $70.303 billion (as of 2007).[1] Sony is one of the leading manufacturers of electronics, video, communications, video game console and information technology products for the consumer and professional markets, a business that has made it one of the world's richest companies.

Sony Corporation is the electronics business unit and the parent company of the Sony Group, which is engaged in business through its five operating segments — electronics, games, entertainment (motion pictures and music), financial services and other. These make Sony one of the most comprehensive entertainment companies in the world. Sony's principal business operations include Sony Corporation (Sony Electronics in the U.S.), Sony Pictures Entertainment, Sony Computer Entertainment, Sony BMG Music Entertainment, Sony Ericsson and Sony Financial Holdings. As a semiconductor maker, Sony is among the Worldwide Top 20 Semiconductor Sales Leaders. The company's slogan is Sony. Like no other.[3]


Masaru Ibuka, the co-founder of Sony

In 1945, after World War II, Masaru Ibuka started a radio repair shop in a bombed-out building in Tokyo.[4] The next year, he was joined by his colleague Akio Morita, and they founded a company called Tokyo Tsushin Kogyo K.K.,[5] which translates into English as Tokyo Telecommunications Engineering Corporation. The company built Japan's first tape recorder, called the Type-G.[6]

In the early 1950s, Ibuka traveled in the United States and heard about Bell Labs' invention of the transistor.[7] He convinced Bell to license the transistor technology to his Japanese company. While most American companies were researching the transistor for its military applications, Ibuka looked to apply it to communications. Although the American companies Regency and Texas Instruments built the first transistor radios, it was Ibuka's company that made them commercially successful for the first time.

In August 1955, Tokyo Telecommunications Engineering released the Sony TR-55, Japan's first commercially produced transistor radio.[8] They followed up in December of the same year by releasing the Sony TR-72, a product that won favor both within Japan and in export markets, including Canada, Australia, the Netherlands and Germany. Featuring six transistors, push-pull output and greatly improved sound quality, the TR-72 continued to be a popular seller into the early sixties.

In May 1956, the company released the TR-6, which featured an innovative slim design and sound quality capable of rivaling portable tube radios. It was for the TR-6 that Sony first contracted "Atchan", a cartoon character created by Fuyuhiko Okabe, to become its advertising character. Now known as "Sony Boy", the character first appeared in a cartoon ad holding a TR-6 to his ear, but went on to represent the company in ads for a variety of products well into the mid-sixties.[9] The following year, 1957, Tokyo Telecommunications Engineering came out with the TR-63 model, then the smallest (112 × 71 × 32 mm) transistor radio in commercial production. It was a worldwide commercial success.[10]

University of Arizona professor Michael Brian Schiffer, Ph.D., says, "Sony was not first, but its transistor radio was the most successful. The TR-63 of 1957 cracked open the U.S. market and launched the new industry of consumer microelectronics." By the mid-1950s, American teens had begun buying portable transistor radios in huge numbers, helping to propel the fledgling industry from an estimated 100,000 units in 1955 to 5,000,000 units by the end of 1968. However, this huge growth in portable transistor radio sales, which saw Sony rise to become the dominant player in the consumer electronics field,[11] was driven not by the consumers who had bought the earlier generation of tube radio consoles, but by a distinctly new American phenomenon of the time: rock and roll.

Origin of name

A Sony building in Ginza, Tokyo

When Tokyo Tsushin Kogyo was looking for a romanized name to use in its marketing, it strongly considered using its initials, TTK. The primary reason it did not is that the railway company Tokyo Kyuko was known as TKK.[12] The company occasionally used the acronym "Totsuko" in Japan, but during his visit to the United States, Morita discovered that Americans had trouble pronouncing that name. Another early name tried out for a while was "Tokyo Teletech", until Morita discovered that an American company was already using Teletech as a brand name.[13]

The name "Sony" was chosen for the brand as a mix of the Latin word Sony or son(us) and also a little boy sonny, which is the root of sonic and sound as well as familiar word of everybody called a boy in February 1955, and company name changed to Sony in January 1958. Morita pushed for a word that does not exist in any language so that they could claim the word "Sony" as their own (which paid off when they successfully sued a candy producer using the name, who claimed that "Sony" was an existing word in some language).[12]

At the time of the change, it was extremely unusual for a Japanese company to use Roman letters instead of kanji to spell its name. The move was not without opposition: TTK's principal bank at the time, Mitsui, had strong feelings about the name. They pushed for a name such as Sony Electronic Industries, or Sony Teletech. Akio Morita was firm, however, as he did not want the company name tied to any particular industry. Eventually, both Ibuka and Mitsui Bank's chairman gave their approval.[14]

New iMac Intel Core 2 Duo

Tuesday, January 29, 2008

Friday, January 18, 2008

Intellectual capital

Intellectual capital is a term with various definitions in different theories of management and economics. Accordingly, its only truly neutral definition is as a debate over economic "intangibles". When the term is used to refer to an actual capital asset whose yield is intellectual rights, it usually means some ambiguous combination of human capital, instructional capital and individual capital employed in productive enterprise.

Such use is rare, however, and the term rarely or never appears in accounting proper - it refers to a debate, and to the assumed capital base that creates intellectual property, rather than an auditable style of capital.

Perhaps due to their industry focus, the term "intellectual capital" is employed mostly by theorists in information technology, innovation research, technology transfer and other fields concerned primarily with technology, standards, and venture capital. It was particularly prevalent in 1995-2000 as theories proliferated to explain the "dotcom boom" and high valuations.



A transitional term

Because there is little agreement on how the "intellectual" constitutes an asset, it is not clear whether the term has a future in the field or will be subsumed by other ideas, e.g. "brand capital" - social trust that exists only via owned instructions - an intangible.

Formerly of Fortune, and currently the editor of the Harvard Business Review, Thomas Stewart is the journalist of record on intellectual capital. He has been following its development since 1991. In his book Intellectual Capital (1997), Stewart introduces IC, offers a taxonomy for organizing it and makes the case for managing it.

Others have followed this line: for example, GartnerGroup in a series of research reports in 2001-2002, and Nick Bontis of McMaster University in several academic papers published in the Journal of Intellectual Capital. Both approaches more or less followed Stewart's proposal, although with some variations: intellectual capital includes human capital (the talent base of the employees), "structural capital" (according to Bontis, "the non-human storehouses of information", which Gartner enlarges to include other organizational knowledge) and "relational capital" (the knowledge embedded in business networks).

Baruch Lev documents "brand" as a new (seventh) form of capital. This seems to violate the basic model of the factors of production in classical microeconomics - and would likely require major rethinking of microeconomics and political economy.

The anti-globalization movement and green economists seem to broadly share the critique of "brand" documented by Naomi Klein in her book No Logo - although, from an economics viewpoint, their proposals for mandatory labelling schemes and a retrenchment of national sovereignty (the so-called "brand versus flag" or "brand versus label" debates) seem to validate Lev's assumption that brand does in fact add genuine value: economically, a flag, a brand, or a label all signify social trust, albeit with different procedures of complaint, recourse, and enforcement.

Brand and intellectual capital debates are generally inseparable from larger debates on the role of corporations and governments, and from larger debates among anthropologists, primatologists and sociologists on imitation versus creativity in shaping human behavior. This article avoids the larger political economy questions and deals with them only as required to explain the focus of intellectual capital theory: the relative valuation and balanced growth of individual, instructional and social capital. Intellectual capital can also serve as a measure of how valuable a company's knowledge is.

Individuals versus instructions

Focusing where the theories agree, there is no clear standard beyond the agreement that individuals and instructions contribute very different value in microeconomics. The question of the contribution of intellectual capital that combines the two in a process is more likely a matter of political economy, and difficult to separate from other issues of relative values of capital across a whole economy or society.

This debate certainly did not begin with Don Sheelan and Naomi Klein - its roots can be seen as far back as John Stuart Mill and David Ricardo, at the very origins of political economy. In the 20th century, the critiques of Ayn Rand and Richard Stallman are seen by some as representing a spectrum in which either all instructional value is derived from individuals, or individuals are valued primarily in terms of the instructional capital they create - clearly political positions reflecting different attitudes to capitalism, rather than analyses of how individuals and instructions actually interact, or, as some critics argue, of how either affects society or nature.

Does "intellect" exclude social capital?

The term "intellectual capital" seems almost exclusively used by theorists seeking ways to make systems or groups cooperate without relying on pre-existing social trust - research into measuring reputation, zero knowledge protocols, and authentication very often overlaps with the economic theories involved.

A broader (and far more standard and common) term, "human capital", assumes implicitly that social capital, which is (vaguely) inter-personal or cultural trust, must be involved in all such processes.

Interoperability focus

Excluding all informal "trust" from human productive activity, presumably, leaves only "intellectual" processes that combine individual creativity and widespread instructional imitation to create value for the enterprise and/or society, e.g. such processes as setting a standard for a programming language - requiring substantial innovation and experiment but ultimately serving no purpose unless the innovation is uniform and widespread enough to enable "interoperability".

The best example may well be the Internet Protocol, or "IP", which was a simple networking protocol originally used to link US defense research sites together. The many individual contributions and extensions were disciplined by a deliberately distrusting process, rather like a court procedure, that included among other things ejecting vendors from commercial trade show space if their equipment failed to interoperate perfectly with that of all others.

Another example is the Bank for International Settlements which seeks to "hardwire the credit culture" of the global central banks to facilitate instant clearing of transactions - by standardizing the trust measures used by banks worldwide.

Meaningless without "social"?

However, it is hard to imagine how such enterprises could have succeeded without the social capital of the United States government and NATO defense establishment itself - a common argument among theorists of human capital, who hold that it is not usually sensible to try to separate the role of trust from intellectual work, and who also often oppose the military-industrial complex that they see as funding such work and imposing its values on it.

It is sometimes argued that theorists of intellectual capital - by assuming that individual and instructional contributions are inseparable, or both equally valuable, or reliant not on social trust at all but rather on vague "intellect" - are deliberately forcing their political economy to conform to ideals of neoclassical economics or even libertarian parties. Denying the creative contributions of labor has, however, been a common theme in economics: treating labor as one of three factors of production was rejected by Marx, who reframed the factors (minus labor) as his "means of production".

Subordinates persons and nature?

Other objections are heard in the anti-globalization movement which objects to, among other things, global use of patent instruments to "protect" instructional capital at the expense of individual capital (persons) or natural capital (ecologies) - which remain fixed in one nation each. Vandana Shiva's popular account of this process gave rise to the term "biopiracy" to describe corporate patents on plants long used by indigenous peoples prior to colonization. She describes the process of colonial "discovery" as applying to land, to labor, and even to knowledge.

Another critique by meme theorist Liane Gabora argues that creativity is vastly underestimated, and imitation vastly overestimated, in most of our economics. In line with feminist economists like Marilyn Waring, Gabora notes that human mothers are immensely creative in raising children, and human artists often invent new technologies while pursuing no clear goal - but neither activity is measured. While this validates the idea that social trust can be minimal in some such processes, it casts doubt upon the notion that economic activity can ever be understood without a deeper understanding of creativity and how it expresses itself economically as individual capital.

Is "intellectual property" valid?

As with much else in economics, the theory came long after the instruments: instruments of patent and copyright protection exist in all countries of the world, and they are increasingly uniform.

There is a substantial literature of intellectual property law and how these protections and instruments further or inhibit productive enterprise.

Controversy seems to surround the question of whether instruments designed primarily to protect rights in individual creativity, e.g. copyright, are appropriate as a means to protect broader shared instructions, e.g. software. Also, whether instruments designed primarily to protect rights in inventions, e.g. patent, can reasonably describe social constructions such as software.

Liberal economists are strong critics of what they call exclusivity rights.

Brand as more than instructional, individual, and social value

Another debate focuses on the role of trademark and brand name in demarking reliable instructions versus membership in a user community - and on who can own the rights to a common phrase or a community's name, and how.

Baruch Lev holds that neither the idea of instructional capital, nor individual capital, nor social capital as understood in sociology sufficiently describes the trust placed by consumers or the standards upheld by the enterprise - that a distinction and separate "brand capital" exists.

Of course, who creates this value, who owns it, and who is liable for harms it does, is the core of the "label versus brand versus flag" policy debate.

Brand as tulip; brand as a deadly sin

It is quite difficult to separate the analysis from the advocacy on this topic, and from analysis of other topics such as ethical investing or moral purchasing, which inherently assume that certain responsibilities accrue to the consumer, the direct supplier, indirect suppliers, or others involved in the regulatory, inspection, enforcement and protection process.

Many critics of pro-"brand" views hold that "brand" is merely an aspect of firm-specific human capital - specifically, firm-specific social capital - and that it can be neither a means of production nor an effective means of protection, since it often disappears very quickly. They point to cases of sudden brand name devaluation, e.g. the "dotcom boom", Enron and Arthur Andersen in late 2001 and early 2002, and far earlier phenomena such as the Dutch tulip mania, as evidence that enterprises whose financial market valuations exceed the instructional reliability, individual creativity, and social trust actually vested in the firm are quickly dragged back down to reasonable valuations based on more traditional criteria associated with financial and infrastructural holdings.

If this view is correct, it has major implications: brand becomes, as Naomi Klein's title No Logo suggests, not a lasting asset but simply a trigger for "fear" or "lust" or "greed" or other sins, with no persistent value. Brand may not even validate that it represents responsible individuals, reliable instructions, and trust in a social structure in which they all cooperate. In effect, mandatory labeling could do everything that a brand could do, and corporations may be, as David Korten claims, mere "responsibility evasion mechanisms".

Brand, flag, label or fear?

All of these positions seem to validate the basic analysis that some kind of intellectual value (beyond the emotional assurances described in social capital theory) accrues to some tag, be it a brand, a flag, or a label. Indeed there is evidence that the three may be interchangeable economically, and merely provide a more specific or tangible assurance to the purchaser:

Proponents of anti-"brand" views often see mandatory labeling backed by national sovereignty (label and flag, with "no logo") as a way to replace or drastically reduce the trust placed in corporate brand names. Most of the strongest opponents of corporate identity are also proponents of such mandatory labeling schemes, e.g. in textiles, and on organic foods. They have had substantial success labeling genetic modification, especially in the EU, and spawned the fields of biosafety and (along with more focused nonproliferation and biological warfare concerns) the newer field of biosecurity.

The nonproliferation debate arises between promoting and inhibiting certain types of trade in intellect - most notably those "dual-use technologies" that can both add and seriously subtract economic value, when expressed in tools or as weapons. These concerns were highlighted when US President George W. Bush in 2001 rejected the Biological Weapons Convention - in part to satisfy biotechnology corporations who feared that arms control inspectors would potentially steal "intellectual property".

The brand, flag, label debate may need to be reframed in terms of simple fear: where did it come from? Will it work? Who's got another one?

Brand as asset

Whether flags, brands, labels or simple fear dominate economic decisions, it seems that the underlying theories of intellectual capital and of human capital do not explain them. When attached to "capital" as prefixes, the terms "intellectual", "knowledge" and "human" often conceal more than their use can reveal. Thus the terms "intellectual capital", "knowledge capital" and "human capital" more properly describe debates, not assets: internally generated intangible assets do not appear on a balance sheet, although International Financial Reporting Standard 3 on Business Combinations requires acquired intangible assets to be accounted for during the purchase price allocation exercise. These terms produce neat abstractions but so far poorly explain what actually occurs in the biologically real world: individuals buying in a social setting based on instructions.

So far, the more specific terms "individual", "instructional" and "social" from human development theory have been preferred as adjectives describing classes of capital. In part this is because these terms have definitions that arise from academic categories and practices rather than from faddish marketing or management theories. There are standards for assigning value to these, e.g. the UN Human Development Index, which literally ranks flags (of countries) by quality of life.

Extending such standards to labels (via mandatory labelling) and applying them positively in brand management, e.g. positioning a brand for appeal to an ethical minority, is increasingly common. Projects by Consumerium and AdBusters seek to make comprehensive outcomes more important in buying decisions. This in turn is part of a trend towards more moral purchasing.

When viewed as an asset, then, a brand is simple social capital that may have an increasing amount of instructional capital attached to satisfy an ever-rising demand for more information about product origin, production and distribution.