Wednesday, October 21, 2009

Wireless USB

Wireless USB is a short-range, high-bandwidth wireless radio communication protocol created by the Wireless USB Promoter Group. Wireless USB is sometimes abbreviated as "WUSB", although the USB Implementers Forum discourages this practice and instead prefers to call the technology "Certified Wireless USB" to differentiate it from competitors. Wireless USB is based on the WiMedia Alliance's Ultra-WideBand (UWB) common radio platform, which is capable of sending 480 Mbit/s at distances up to 3 meters and 110 Mbit/s at up to 10 meters. It was designed to operate in the 3.1 to 10.6 GHz frequency range, although local regulatory policies may restrict the legal operating range for any given country.

Wireless USB is used in game controllers, printers, scanners, digital cameras, MP3 players, hard disks and flash drives. Kensington released a Wireless USB universal docking station in August 2008. Wireless USB is also suitable for transferring multiple parallel video streams within the bandwidth that UWB provides.

The Wireless USB Promoter Group was formed in February 2004 to define the Wireless USB specification. The group consists of Agere Systems (now merged with LSI Corporation), Hewlett-Packard, Intel, Microsoft, NEC Corporation, Philips and Samsung.

In May 2005, the Wireless USB Promoter Group announced the completion of the Wireless USB specification.

In June 2006, five companies showed the first multi-vendor interoperability demonstration of Wireless USB. A laptop with an Intel host adapter using an Alereon PHY was used to transfer high definition video from a Philips wireless semiconductor solution with a Realtek PHY, all using Microsoft Windows XP drivers developed for Wireless USB.

In October 2006 the FCC approved the first complete Host Wire Adapter (HWA) and Device Wire Adapter (DWA) wireless USB solution from WiQuest Communications for both outdoor and indoor use. The first retail product was shipped by IOGEAR using Alereon, Intel and NEC silicon in mid-2007. Around the same time, Belkin, Dell, Lenovo and D-Link began shipping products that incorporated WiQuest technology. These products included embedded cards in notebook PCs or hub/adapter solutions for PCs that did not include Wireless USB. In 2008, a new Wireless USB docking station from Kensington was made available through Dell. This product was unique in that it was the first product on the market to support video and graphics over a USB connection by using DisplayLink USB graphics technology. Kensington's docking station enables wireless connectivity between a notebook PC and an external monitor, speakers, and existing wired USB peripherals. Imation announced Q4 2008 availability of a new external wireless HDD. Both of these products are based on WiQuest technology.

On March 16, 2009, the WiMedia Alliance announced that it was entering into technology transfer agreements for the WiMedia Ultra-wideband (UWB) specifications. WiMedia will transfer all current and future specifications, including work on future high-speed and power-optimized implementations, to the Bluetooth Special Interest Group (SIG), the Wireless USB Promoter Group and the USB Implementers Forum. After the successful completion of the technology transfer, marketing and related administrative items, the WiMedia Alliance will cease operations.

About Satellite Internet

Satellites in geostationary orbit are able to relay broadband data from the satellite company to each customer. Satellite Internet is usually among the most expensive ways of gaining broadband Internet access, but in rural areas its only competition may be cellular broadband. However, costs have been coming down in recent years to the point that it is becoming more competitive with other broadband options.

Broadband satellite Internet also has a high latency problem, caused by the signal having to travel out to a satellite in geostationary orbit, at an altitude of 35,786 km (22,236 mi) above sea level (at the equator), and back to Earth again. The signal delay can be as much as 500 milliseconds to 900 milliseconds, which makes this service unsuitable for applications requiring real-time user input, such as certain multiplayer Internet games and first-person shooters played over the connection. Despite this, it is still possible for many games to be played, but the scope is limited to real-time strategy or turn-based games. The functionality of live interactive access to a distant computer can also be subject to the problems caused by high latency. These problems are more than tolerable for basic email access and web browsing and in most cases are barely noticeable.

For geostationary satellites there is no way to eliminate this problem. The delay is primarily due to the great distances travelled, which, even at the speed of light (about 300,000 km/second or 186,000 miles per second), are significant. Even if all other signalling delays could be eliminated, it still takes a radio wave about 240 milliseconds to cover the more than 71,400 km (44,366 mi) from ground level up to the satellite and back down, and about 480 milliseconds, roughly half a second, for the more than 143,000 km (88,856 mi) of a full round trip (user to ISP, and then back to the user, with zero network delays). Factoring in other normal delays from network sources gives a typical one-way connection latency of 500–700 ms from the user to the ISP, or about 1,000–1,400 milliseconds latency for the total Round Trip Time (RTT) back to the user. This is far worse than most dial-up modem users' experience, at typically only 150–200 ms total latency.

Medium Earth Orbit (MEO) and Low Earth Orbit (LEO) satellites, however, do not have such great delays. The current LEO constellations of Globalstar and Iridium satellites have delays of less than 40 ms round trip, but their throughput is less than broadband, at 64 kbps per channel. The Globalstar constellation orbits 1,420 km above the earth and Iridium orbits at 670 km altitude. The proposed O3b Networks MEO constellation, scheduled for deployment in 2010, would orbit at 8,062 km, with an RTT latency of approximately 125 ms. The proposed new network is also designed for much higher throughput, with links well in excess of 1 Gbit/s (gigabit per second).
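
These delay figures follow directly from the orbital altitudes and the speed of light. The short Python sketch below reproduces them under simplifying assumptions: it considers propagation delay only, ignores ground-network and processing delays, and treats the path as straight up and down to a satellite directly overhead.

    # Rough propagation-delay estimates for the orbits discussed above.
    SPEED_OF_LIGHT_KM_S = 299_792          # kilometres per second, in vacuum

    def propagation_ms(path_km):
        """Milliseconds for a radio signal to cover path_km at light speed."""
        return path_km / SPEED_OF_LIGHT_KM_S * 1000.0

    GEO_ALT_KM = 35_786                    # geostationary altitude quoted above
    MEO_ALT_KM = 8_062                     # proposed O3b orbit quoted above

    for name, alt in [("GEO", GEO_ALT_KM), ("O3b MEO", MEO_ALT_KM)]:
        bounce = propagation_ms(2 * alt)   # ground -> satellite -> ground
        rtt = propagation_ms(4 * alt)      # user -> ISP -> user (two bounces)
        print(f"{name}: one bounce ~{bounce:.0f} ms, round trip ~{rtt:.0f} ms")

    # Prints roughly: GEO: one bounce ~239 ms, round trip ~477 ms (hence the
    # ~500 ms figure above), and O3b MEO: one bounce ~54 ms, round trip ~108 ms,
    # consistent with the ~125 ms quoted once other delays are added.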

Most satellite Internet providers also have a FAP (Fair Access Policy). Perhaps one of the largest disadvantages of satellite Internet, these FAPs usually throttle a user's throughput to dial-up data rates after a certain "invisible wall" is hit (usually around 200 MB a day). This FAP usually lasts for 24 hours after the wall is hit, after which the user's throughput is restored to whatever tier they paid for. This makes bandwidth-intensive activities, such as P2P and newsgroup binary downloading, nearly impossible to complete in a reasonable amount of time.
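
The throttling behaviour such a policy describes can be sketched in a few lines. The toy model below uses the 200 MB daily cap and 24-hour penalty mentioned above plus entirely hypothetical rate tiers; it is an illustration, not any provider's actual algorithm (for instance, the daily usage reset is omitted for brevity).

    from datetime import datetime, timedelta

    DAILY_CAP_BYTES = 200 * 1024 * 1024   # the ~200 MB "invisible wall"
    PENALTY_PERIOD = timedelta(hours=24)  # throttle window after the cap is hit
    NORMAL_RATE_KBPS = 1500               # hypothetical purchased tier
    THROTTLED_RATE_KBPS = 50              # hypothetical dial-up-like penalty rate

    class FairAccessPolicy:
        """Toy model of a satellite Fair Access Policy."""

        def __init__(self):
            self.bytes_used = 0
            self.penalty_until = None

        def record_transfer(self, nbytes, now=None):
            now = now or datetime.now()
            self.bytes_used += nbytes
            if self.bytes_used > DAILY_CAP_BYTES and self.penalty_until is None:
                self.penalty_until = now + PENALTY_PERIOD   # wall hit: start penalty

        def allowed_rate_kbps(self, now=None):
            now = now or datetime.now()
            if self.penalty_until and now < self.penalty_until:
                return THROTTLED_RATE_KBPS                  # throttled for 24 hours
            return NORMAL_RATE_KBPS                         # back to the paid tier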

The European ASTRA2Connect system has a FAP based on a monthly limit of 2 GB of downloaded data, with download data rates reduced for the remainder of the month if the limit is exceeded.

Advantages

   1. True global broadband Internet access availability
   2. Mobile connection to the Internet (with some providers)

Disadvantages

   1. High latency compared to other broadband services, especially 2-way satellite service
   2. Unreliable: drop-outs are common during travel, inclement weather, and during sunspot activity
   3. The narrow-beam highly directional antenna must be accurately pointed to the satellite orbiting overhead
   4. The Fair Access Policy limits heavy usage, if applied by the service provider
   5. VPN use is discouraged, problematic, and/or restricted with satellite broadband, although available at a price
   6. One-way satellite service requires the use of a modem or other data uplink connection
   7. Satellite dishes are very large. Although most of them employ plastic to reduce weight, they are typically between 80 and 120 cm (30 to 48 inches) in diameter.

Broadband Internet Technology

Broadband Internet access, often shortened to just broadband, is high-data-rate Internet access, typically contrasted with dial-up access using a 56k modem.

Dial-up modems are limited to a bitrate of less than 56 kbit/s (kilobits per second) and require the full use of a telephone line—whereas broadband technologies supply more than double this rate and generally without disrupting telephone use.

Although various minimum bandwidths have been used in definitions of broadband, ranging from 64 kbit/s up to 2.0 Mbit/s, the 2006 OECD report is typical in defining broadband as having download data transfer rates equal to or faster than 256 kbit/s, while the United States (US) Federal Communications Commission (FCC), as of 2009, defines "Basic Broadband" as data transmission speeds exceeding 768 kilobits per second (kbit/s), or 768,000 bits per second, in at least one direction: downstream (from the Internet to the user's computer) or upstream (from the user's computer to the Internet). The trend is to raise the threshold of the broadband definition as the marketplace rolls out faster services.
Data rates are defined in terms of maximum download because several common consumer broadband technologies such as ADSL are "asymmetric", supporting a much slower maximum upload data rate than download rate.

Broadband is often called "high-speed" Internet, because it usually has a high rate of data transmission. In general, any connection to the customer of 256 kbit/s (0.256 Mbit/s) or greater is considered broadband Internet. The International Telecommunication Union Standardization Sector (ITU-T) recommendation I.113 has defined broadband as a transmission capacity that is faster than primary rate ISDN, at 1.5 to 2 Mbit/s. The FCC definition of broadband is 768 kbit/s (0.768 Mbit/s). The Organization for Economic Co-operation and Development (OECD) has defined broadband as 256 kbit/s in at least one direction, and this bit rate is the most common baseline that is marketed as "broadband" around the world. There is no specific bitrate defined by the industry, however, and "broadband" can mean lower-bitrate transmission methods. Some Internet Service Providers (ISPs) use this to their advantage in marketing lower-bitrate connections as broadband.

In practice, the advertised bandwidth is not always reliably available to the customer; ISPs often allow a greater number of subscribers than their backbone connection or neighborhood access network can handle, under the assumption that most users will not be using their full connection capacity very frequently. This aggregation strategy works more often than not, so users can typically burst to their full bandwidth most of the time; however, peer-to-peer (P2P) file sharing systems, which often require extended periods of high bandwidth, strain these assumptions and can cause major problems for ISPs who have excessively overbooked their capacity. For more on this topic, see traffic shaping. As take-up of these introductory products increases, telcos are starting to offer higher bit rate services. For existing connections, this usually involves simply reconfiguring the existing equipment at each end of the connection.
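
A back-of-the-envelope calculation makes this oversubscription concrete. All figures below are invented for illustration; real contention ratios vary widely between ISPs.

    subscribers = 1000        # customers sharing one backbone link (assumed)
    advertised_mbps = 8       # per-subscriber advertised rate (assumed)
    backbone_mbps = 1000      # capacity of the shared backbone link (assumed)

    contention_ratio = subscribers * advertised_mbps / backbone_mbps
    print(f"Contention ratio: {contention_ratio:.0f}:1")             # 8:1 here

    # If every subscriber ran at full rate at once (e.g. sustained P2P),
    # each would get only a fraction of the advertised speed:
    worst_case_mbps = backbone_mbps / subscribers
    print(f"Worst-case per-user rate: {worst_case_mbps:.1f} Mbit/s")  # 1.0 Mbit/s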

As the bandwidth delivered to end users increases, the market expects that video on demand services streamed over the Internet will become more popular, though at the present time such services generally require specialized networks. The data rates on most broadband services still do not suffice to provide good quality video, as MPEG-2 video requires about 6 Mbit/s for good results. Adequate video for some purposes becomes possible at lower data rates, with rates of 768 kbit/s and 384 kbit/s used for some video conferencing applications, and rates as low as 100 kbit/s used for videophones using H.264/MPEG-4 AVC. The MPEG-4 format delivers high-quality video at 2 Mbit/s, at the low end of cable modem and ADSL performance.
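
A small sketch can check which of the bit rates quoted above fit within a given downstream rate. The 20% headroom left for other traffic is an arbitrary assumption, and the rate table simply restates the figures from the paragraph.

    VIDEO_BITRATES_KBPS = {
        "MPEG-2, good quality": 6000,
        "MPEG-4, high quality": 2000,
        "Video conferencing (high)": 768,
        "Video conferencing (low)": 384,
        "H.264 videophone": 100,
    }

    def usable_formats(downstream_kbps, headroom=0.8):
        """Formats whose bit rate fits within 80% of the downstream rate."""
        budget = downstream_kbps * headroom
        return [name for name, rate in VIDEO_BITRATES_KBPS.items() if rate <= budget]

    print(usable_formats(256))    # OECD baseline: only the 100 kbit/s videophone fits
    print(usable_formats(8000))   # a typical ADSL tier: every listed format fits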

Increased bandwidth has already made an impact on newsgroups: postings to groups such as alt.binaries have grown from JPEG files to entire CD and DVD images. According to NTL, traffic on their network increased from an inbound news feed of 150 gigabytes of data per day and 1 terabyte of data out each day in 2001 to 500 gigabytes in and over 4 terabytes out each day in 2002.

The standard broadband technologies in most areas are DSL and cable modems. Newer technologies in use include VDSL and pushing optical fiber connections closer to the subscriber in both telephone and cable plants. Fiber-optic communication, while only recently being used in fiber to the premises and fiber to the curb schemes, has played a crucial role in enabling Broadband Internet access by making transmission of information over larger distances much more cost-effective than copper wire technology. In a few areas not served by cable or ADSL, community organizations have begun to install Wi-Fi networks, and in some cities and towns local governments are installing municipal Wi-Fi networks. As of 2006, broadband mobile Internet access has become available at the consumer level in some countries, using the HSDPA and EV-DO technologies. The newest technology being deployed for mobile and stationary broadband access is WiMAX.

One of the great challenges of broadband is to provide service to potential customers in areas of low population density, such as to farmers, ranchers, and small towns. In cities where the population density is high, it is easy for a service provider to recover equipment costs, but each rural customer may require expensive equipment to get connected.

Several rural broadband solutions exist, though each has its own pitfalls and limitations. Some choices are better than others, but much depends on how proactive the local phone company is about upgrading its rural technology.

Wireless Internet Service Providers (WISPs) are rapidly becoming a popular broadband option for rural areas.

Sunday, October 4, 2009

About Molecular Nanotechnology



Molecular nanotechnology, sometimes called molecular manufacturing, is a term given to the concept of engineered nanosystems (nanoscale machines) operating on the molecular scale. It is especially associated with the concept of a molecular assembler, a machine that can produce a desired structure or device atom-by-atom using the principles of mechanosynthesis. Manufacturing in the context of productive nanosystems is not related to, and should be clearly distinguished from, the conventional technologies used to manufacture nanomaterials such as carbon nanotubes and nanoparticles.


When the term "nanotechnology" was independently coined and popularized by Eric Drexler (who at the time was unaware of an earlier usage by Norio Taniguchi), it referred to a future manufacturing technology based on molecular machine systems. The premise was that molecular-scale biological analogies of traditional machine components demonstrated that molecular machines were possible: from the countless examples found in biology, it is known that sophisticated, stochastically optimised biological machines can be produced.

It is hoped that developments in nanotechnology will make possible the construction of such machines by some other means, perhaps using biomimetic principles. However, Drexler and other researchers have proposed that advanced nanotechnology, although perhaps initially implemented by biomimetic means, ultimately could be based on mechanical engineering principles, namely a manufacturing technology based on the mechanical functionality of these components (such as gears, bearings, motors, and structural members) that would enable programmable, positional assembly to atomic specification. The physics and engineering performance of exemplar designs were analyzed in Drexler's book Nanosystems.

In general it is very difficult to assemble devices on the atomic scale, as all one has to position atoms are other atoms of comparable size and stickiness. Another view, put forth by Carlo Montemagno, is that future nanosystems will be hybrids of silicon technology and biological molecular machines. Yet another view, put forward by the late Richard Smalley, is that mechanosynthesis is impossible due to the difficulties in mechanically manipulating individual molecules.

This led to an exchange of letters in the ACS publication Chemical & Engineering News in 2003. Though biology clearly demonstrates that molecular machine systems are possible, non-biological molecular machines are today only in their infancy. Leaders in research on non-biological molecular machines are Dr. Alex Zettl and his colleagues at Lawrence Berkeley National Laboratory and UC Berkeley. They have constructed at least three distinct molecular devices whose motion is controlled from the desktop by changing voltage: a nanotube nanomotor, a molecular actuator, and a nanoelectromechanical relaxation oscillator.

An experiment indicating that positional molecular assembly is possible was performed by Ho and Lee at Cornell University in 1999. They used a scanning tunneling microscope to move an individual carbon monoxide molecule (CO) to an individual iron atom (Fe) sitting on a flat silver crystal, and chemically bound the CO to the Fe by applying a voltage.

What Is Nanotechnology?

Nanotechnology, often shortened to "nanotech", is the study of the control of matter on an atomic and molecular scale. Generally nanotechnology deals with structures of the size 100 nanometers or smaller, and involves developing materials or devices within that size range. Nanotechnology is very diverse, ranging from extensions of conventional device physics, to completely new approaches based upon molecular self-assembly, to developing new materials with dimensions on the nanoscale, even to speculation on whether we can directly control matter on the atomic scale.


There has been much debate on the future implications of nanotechnology. Nanotechnology has the potential to create many new materials and devices with wide-ranging applications, such as in medicine, electronics, and energy production. On the other hand, nanotechnology raises many of the same issues as any new technology, including concerns about the toxicity and environmental impact of nanomaterials and their potential effects on global economics, as well as speculation about various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted.

The first use of the concepts in 'nano-technology' (but pre-dating use of that name) was in "There's Plenty of Room at the Bottom," a talk given by physicist Richard Feynman at an American Physical Society meeting at Caltech on December 29, 1959. Feynman described a process by which the ability to manipulate individual atoms and molecules might be developed, using one set of precise tools to build and operate another proportionally smaller set, and so on down to the needed scale. In the course of this, he noted, scaling issues would arise from the changing magnitude of various physical phenomena: gravity would become less important, while surface tension and Van der Waals attraction would become more important. This basic idea appears plausible, and exponential assembly enhances it with parallelism to produce a useful quantity of end products.

The term "nanotechnology" was defined by Tokyo Science University Professor Norio Taniguchi in a 1974 paper as follows: "'Nano-technology' mainly consists of the processing of, separation, consolidation, and deformation of materials by one atom or by one molecule." In the 1980s the basic idea of this definition was explored in much more depth by Dr. K. Eric Drexler, who promoted the technological significance of nano-scale phenomena and devices through speeches and the books Engines of Creation: The Coming Era of Nanotechnology (1986) and Nanosystems: Molecular Machinery, Manufacturing, and Computation, and so the term acquired its current sense. Engines of Creation is considered the first book on the topic of nanotechnology.

Nanotechnology and nanoscience got started in the early 1980s with two major developments: the birth of cluster science and the invention of the scanning tunneling microscope (STM). These developments led to the discovery of fullerenes in 1985 and carbon nanotubes a few years later. In another development, the synthesis and properties of semiconductor nanocrystals were studied, which led to a rapidly increasing number of metal and metal oxide nanoparticles and quantum dots. The atomic force microscope was invented six years after the STM. In 2000, the United States National Nanotechnology Initiative was founded to coordinate Federal nanotechnology research and development.

One nanometer (nm) is one billionth, or 10⁻⁹, of a meter. By comparison, typical carbon-carbon bond lengths, or the spacing between these atoms in a molecule, are in the range 0.12–0.15 nm, and a DNA double helix has a diameter around 2 nm. On the other hand, the smallest cellular life-forms, the bacteria of the genus Mycoplasma, are around 200 nm in length.


To put that scale in another context, the comparative size of a nanometer to a meter is the same as that of a marble to the size of the Earth. Or another way of putting it: a nanometer is the amount a man's beard grows in the time it takes him to raise the razor to his face.
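
The marble comparison is easy to sanity-check with a couple of lines of arithmetic; the 1.5 cm marble diameter is an assumed typical size.

    nanometre = 1e-9               # metres
    marble_diameter_m = 0.015      # an assumed ~1.5 cm marble
    earth_diameter_m = 12_742_000  # mean diameter of the Earth, in metres

    print(nanometre / 1.0)                          # 1e-09
    print(marble_diameter_m / earth_diameter_m)     # ~1.2e-09, the same order of magnitude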

Two main approaches are used in nanotechnology. In the "bottom-up" approach, materials and devices are built from molecular components which assemble themselves chemically by principles of molecular recognition. In the "top-down" approach, nano-objects are constructed from larger entities without atomic-level control.

Areas of physics such as nanoelectronics, nanomechanics and nanophotonics have evolved during the last few decades to provide a basic scientific foundation for nanotechnology.

Modern synthetic chemistry has reached the point where it is possible to prepare small molecules with almost any structure. These methods are used today to produce a wide variety of useful chemicals such as pharmaceuticals or commercial polymers. This ability raises the question of extending this kind of control to the next-larger level, seeking methods to assemble these single molecules into supramolecular assemblies consisting of many molecules arranged in a well-defined manner.


These approaches use the concepts of molecular self-assembly and/or supramolecular chemistry so that components automatically arrange themselves into some useful conformation through a bottom-up approach. The concept of molecular recognition is especially important: molecules can be designed so that a specific conformation or arrangement is favored due to non-covalent intermolecular forces. The Watson–Crick base-pairing rules are a direct result of this, as is the specificity of an enzyme being targeted to a single substrate, or the specific folding of the protein itself. Thus, two or more components can be designed to be complementary and mutually attractive so that they make a more complex and useful whole.

Such bottom-up approaches should be able to produce devices in parallel and much more cheaply than top-down methods, but they could potentially be overwhelmed as the size and complexity of the desired assembly increases. Most useful structures require complex and thermodynamically unlikely arrangements of atoms. Nevertheless, there are many examples of self-assembly based on molecular recognition in biology, most notably Watson–Crick base-pairing and enzyme-substrate interactions. The challenge for nanotechnology is whether these principles can be used to engineer new constructs in addition to natural ones.

Saturday, October 3, 2009

What is EVDO?

EVDO is an acronym for "Evolution-Data Only" or "Evolution-Data Optimized", a standard for high-speed wireless networks used for broadband Internet connectivity. EVDO enables computer users to have high-speed Internet access without the help of a hotspot. Just by inserting an EVDO card into the computer, users get connected to the Internet within seconds and have Net access at DSL-comparable speeds.


While traditional wireless networks assign a dedicated path between the source and destination for the entire duration of the call, much like fixed-line telephone networks, EVDO transmits several users' data through a single channel using Code Division Multiple Access (CDMA) as well as Time Division Multiple Access (TDMA) to achieve higher throughput and better utilization of network bandwidth.

The standard has undergone several revisions, denoted Rev. 0, Rev. A, Rev. B and so on. Rev. 0 supports forward link speeds up to 2.4 Mbit/s, while Rev. A can go up to 3.1 Mbit/s. EVDO is part of the CDMA2000 family of standards and has been adopted by many service providers offering high-speed broadband connectivity to mobile phone users through CDMA networks. It was developed by Qualcomm in the late 1990s. Since the standard was a direct evolution of the 1xRTT standard but carried only data, it was initially called Evolution-Data Only. Later, since the word 'only' seemed to add a negative connotation to the name, it was changed to Evolution-Data Optimized. Since the new name was more marketable and sounded more hi-tech, it stuck.

EVDO uses the existing broadcast frequencies of current CDMA networks, which is a major advantage compared to competing technologies that often require expensive hardware and software changes or upgrades to the network. Verizon and Sprint are the two major service providers in the US using EVDO. Verizon has implemented Rev. A throughout its network, and Sprint is rapidly catching up. While there is also a large presence of EVDO technology in Korea, it has made relatively little impact in Europe and in the Asian countries that predominantly use the W-CDMA standard for high-speed data access.

VSAT Technology


VSAT is an abbreviation for Very Small Aperture Terminal. It is basically a two-way satellite ground station with a dish antenna smaller than 3 meters (most are about 0.75 m to 1.2 m). VSAT transmission rates typically range from very low up to 4 Mbit/s. A VSAT's primary job is to access satellites in geosynchronous orbit and relay data from terminals on Earth to other terminals and hubs. They often carry narrowband data, such as credit card transactions, polling, RFID (radio frequency identification) data, and SCADA (Supervisory Control and Data Acquisition) traffic, or broadband data, such as satellite Internet, VoIP, and video. VSAT technology is also used for various other types of communications.
Equatorial Communications first used spread spectrum technology to commercialize VSATs, which were at the time C band (6 GHz) receive-only systems. This commercialization led to over 30,000 sales of the 60 cm antenna systems in the early 1980s. Equatorial Communications sold about 10,000 more units from 1984 to 1985 after developing a C band (4 and 6 GHz) two-way system measuring about 1 m x 0.5 m.
In 1985, Ku band (12 to 14 GHz) VSATs, today the world's most widely used, were co-developed by Schlumberger Oilfield Research and Hughes Aerospace. They were primarily used to provide portable network connections for exploration units, particularly oil field drilling crews.
Currently, the largest VSAT network consists of over 12,000 sites and is administered by Spacenet and MCI for the US Postal Service (USPS). Walgreens Pharmacy, Dollar General, CVS, Rite Aid, Wal-Mart, Yum! Brands (including Taco Bell, Pizza Hut, Long John Silver's, and other fast food chains), GTEC, SGI, and Intralot also operate large VSAT networks. Large car makers such as Ford and General Motors also use VSAT technology, for example to transmit and receive sales figures and orders, distribute international communications and service bulletins, and deliver distance learning courses. An example of this is the "FordStar Network."
Two-way satellite Internet providers also use VSAT technology, including StarBand, WildBlue, and HughesNet in the United States and SatLynx, Bluestream, and Technologie Satelitarne in Europe, along with many other broadband services in rural areas around the world where high-speed wired Internet connections are not available. A statistic from December 2004 showed that over a million VSATs were in place.
VSAT technology has many advantages, which is why it is so widely used today. One is availability: the service can be deployed basically anywhere in the world. A VSAT also offers a wireless link that is completely independent of the local infrastructure, which makes it a good backup for potential disasters. Deployability is another strength, as a VSAT service can be set up in a matter of minutes. The strength and speed of the connection being homogeneous anywhere within the coverage area is also a big plus. The connection is also quite secure, as VSAT networks are private layer-2 networks over the air. Pricing is affordable as well, since the broadcast download scheme (e.g. DVB-S) allows operators to serve the same content to thousands of locations at once without any additional cost. Last but not least, most VSAT systems today use onboard acceleration of protocols (e.g. TCP, HTTP), which allows them to deliver high-quality connections regardless of the latency.
As with everything, VSAT also has its downsides. Firstly, because VSAT technology relies on satellites in geosynchronous orbit, the minimum round-trip latency is about 500 milliseconds. Therefore, it is not the ideal technology to use with protocols that require constant back-and-forth transmission, such as online games. The environment can also play a role in degrading VSAT performance. Although not as badly affected as one-way TV systems like DirecTV and DISH Network, a VSAT can still suffer from a weak signal, since performance depends on the antenna size, the transmitter's power, and the frequency band. Last but not least, although not that big of a concern, installation can be a problem, as VSAT services require an outdoor antenna with a clear view of the sky. An awkward roof, such as on some skyscraper designs, can be problematic.




Wireless modem


A wireless modem is a type of modem which connects to a wireless network instead of to the telephone system. When you connect with a wireless modem, you are attached directly to your wireless ISP (Internet Service Provider) and you can then access the Internet.

Mobile phones, smartphones, and PDAs can be employed as data modems to form a wireless access point connecting a personal computer to the Internet (or some proprietary network). In this use, the mobile phone provides a gateway between the cellular service provider's data network and the Point-to-Point Protocol (PPP) spoken by PCs. Almost all current mobile phone models support the Hayes command set, a standard method of controlling modems. To the PC, the phone appears as an external modem when connected via serial cable, USB, IrDA infrared or Bluetooth wireless.
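
As a rough sketch of how a PC drives such a phone through the Hayes command set, the hypothetical Python example below uses the pyserial library. The device path, baud rate, the APN string "internet" and the *99***1# packet-data dial string are assumptions that vary by operating system, connection type and operator.

    import serial  # pyserial; the phone must appear as a serial device (USB, IrDA or Bluetooth)

    PORT = "/dev/ttyUSB0"  # assumed device path

    def at(ser, command):
        """Send one Hayes/AT command and return whatever the phone replies."""
        ser.write((command + "\r").encode("ascii"))
        return ser.read(256).decode("ascii", errors="replace")

    with serial.Serial(PORT, baudrate=115200, timeout=2) as ser:
        print(at(ser, "AT"))                            # basic handshake, expect "OK"
        print(at(ser, 'AT+CGDCONT=1,"IP","internet"'))  # define a packet-data context (placeholder APN)
        print(at(ser, "ATD*99***1#"))                   # dial the packet-data service; PPP then runs over this link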

Wireless FireWire, USB and serial modems are also used with the Wi-Fi and WiMAX standards, operating at microwave frequencies, to give a laptop, PDA or desktop computer an access point to a network. The modems range from as large as a regular cable modem down to as small as a Wi-Fi USB dongle. If combined with VoIP technology, these computing devices can achieve telephone-like capability to make and receive telephone calls.

PCMCIA, ExpressCard and Compact Flash modems are also used. These card-modems can also have GPS included.

While some analogue mobile phones provided a standard RJ11 telephone socket into which a normal landline modem could be plugged, this only provided slow dial-up connections, usually 2.4 kilobits per second (kbit/s) or less. The next generation of phones, known as 2G (for 'second generation'), were digital and offered faster dial-up speeds of 9.6 kbit/s or 14.4 kbit/s without the need for a separate modem. A further evolution called HSCSD used multiple GSM channels (two or three in each direction) to support up to 43.2 kbit/s. All of these technologies still required their users to have a dial-up ISP to connect to and provide the Internet access; it was not provided by the mobile phone network itself.

The release of 2.5G phones with support for packet data changed this. The 2.5G networks break both digital voice and data into small chunks, and mix both onto the network simultaneously in a process called packet switching. This allows the phone to have a voice connection and a data connection at the same time, rather than a single channel that has to be used for one or the other. The network can link the data connection into a company network, but for most users the connection is to the Internet. This allows web browsing on the phone, but a PC can also tap into this service if it connects to the phone. The PC needs to send a special telephone number to the phone to get access to the packet data connection. From the PC's viewpoint, the connection still looks like a normal PPP dial-up link, but it all terminates on the phone, which then handles the exchange of data with the network. Speeds on 2.5G networks are usually in the 30–50 kbit/s range.

3G networks have taken this approach to a higher level, using different underlying technology but the same principles. They routinely provide speeds over 300 kbit/s. With this increased speed, Internet connection sharing via WLAN has become a workable reality. A particularly popular project has been the Stomp Box, a router that shares GPRS Internet access via Wi-Fi.
A further evolution is the 3.5G technology HSDPA, which has the capacity to provide speeds of several megabits per second.

WiMAX has now also been announced; it will allow Internet connection sharing over wide areas (region-wide, as opposed to the local coverage of Wi-Fi), perhaps effectively eliminating the need for separate wireless modems. This, of course, applies only in areas where WiMAX is introduced (e.g. cities).


Friday, October 2, 2009

Latest Laptops


A laptop is a personal computer designed for mobile use, small and light enough to sit on one's lap while in use. A laptop integrates most of the typical components of a desktop computer, including a display, a keyboard, a pointing device (a touchpad, also known as a trackpad, and/or a pointing stick), speakers, and often a battery, into a single small and light unit. The rechargeable battery (if present) is charged from an AC adapter and typically stores enough energy to run the laptop for two to three hours in its initial state, depending on the configuration and power management of the computer.

Laptops are usually shaped like a large notebook with thicknesses between 0.7–1.5 inches (18–38 mm) and dimensions ranging from 10x8 inches (27x22cm, 13" display) to 15x11 inches (39x28cm, 17" display) and up. Modern laptops weigh 3 to 12 pounds (1.4 to 5.4 kg); older laptops were usually heavier. Most laptops are designed in the flip form factor to protect the screen and the keyboard when closed. Modern tablet laptops have a complex joint between the keyboard housing and the display, permitting the display panel to swivel and then lie flat on the keyboard housing. They usually have a touchscreen display and some include handwriting recognition or graphics drawing capability.

Laptops were originally considered to be "a small niche market" and were thought suitable mostly for "specialized field applications" such as "the military, the Internal Revenue Service, accountants and sales representatives". But today, there are already more laptops than desktops in businesses, and laptops are becoming obligatory for student use and more popular for general use. In 2008 more laptops than desktops were sold in the US and it has been predicted that the same milestone will be reached in the worldwide market as soon as late 2009.

Desktop Replacement:

A desktop replacement computer is a laptop that provides most of the capabilities of a desktop computer, with a similar level of performance. Desktop replacements are usually larger and heavier than standard laptops. They contain more powerful components and have a 15" or larger display. Because of their bulk, they are not as portable as other laptops and their operation time on batteries is typically shorter; instead, they are meant to be used as a more compact, easier-to-carry alternative to a desktop computer.


Some laptops in this class use a limited range of desktop components to provide better performance for the same price at the expense of battery life; in a few of those models, there is no battery at all and the laptop can only be used when plugged in. These are sometimes called desknotes, a portmanteau of the words "desktop" and "notebook," though the term can also be applied to desktop replacement computers in general.

In the early 2000s, desktops were more powerful, easier to upgrade, and much cheaper in comparison with laptops. But in the last few years, those advantages have drastically shrunk as the performance of laptops has markedly increased. In the second half of 2008, laptops finally outsold desktops for the first time ever. In the U.S., PC shipments declined 10 percent in the fourth quarter of 2008. In Asia, PC shipments grew only 1.8 percent over the same quarter of the previous year, the worst growth since PC shipment statistics began to be tracked.

The names "Media Center Laptops" and "Gaming Laptops" are also used to describe specialized members of this class of notebooks.

3G to 4G Technology

4G refers to the fourth generation of cellular wireless standards and is a successor to the 3G and 2G standards. The rest of this article associates 4G with International Mobile Telecommunications-Advanced (IMT-Advanced), though 4G is a broader term and could include standards outside IMT-Advanced. A 4G system may upgrade existing communication networks and is expected to provide a comprehensive and secure IP-based solution where facilities such as voice, data and streamed multimedia will be provided to users on an "Anytime, Anywhere" basis and at much higher data rates compared to previous generations.

4G is being developed to accommodate the QoS and rate requirements set by forthcoming applications like wireless broadband access, Multimedia Messaging Service (MMS), video chat, mobile TV, HDTV content, Digital Video Broadcasting (DVB), minimal services like voice and data, and other services that utilize bandwidth.

The 4G working group has defined the following as objectives of the 4G wireless communication standard:

1. A spectrally efficient system (in bits/s/Hz and bits/s/Hz/site; a rough illustration follows below),

2. High network capacity: more simultaneous users per cell,

3. A nominal data rate of 100 Mbit/s while the client physically moves at high speeds relative to the station, and 1 Gbit/s while client and station are in relatively fixed positions, as defined by the ITU-R,

4. A data rate of at least 100 Mbit/s between any two points in the world,

5. Smooth handoff across heterogeneous networks,

6. Seamless connectivity and global roaming across multiple networks,

7. High quality of service for next-generation multimedia support (real-time audio, high-speed data, HDTV video content, mobile TV, etc.),

8. Interoperability with existing wireless standards, and

9. An all-IP, packet-switched network.

In summary, the 4G system should dynamically share and utilize network resources to meet the minimal requirements of all the 4G enabled users.
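
As a rough illustration of objectives 1 and 3 above, the spectral efficiency implied by those rate targets can be estimated once a channel bandwidth is assumed. The 100 MHz and 20 MHz bandwidths below are hypothetical values chosen only to make the arithmetic concrete; they are not part of the requirement.

    def spectral_efficiency(bit_rate_bps, bandwidth_hz):
        """Bits per second carried per hertz of spectrum."""
        return bit_rate_bps / bandwidth_hz

    # 1 Gbit/s fixed/nomadic target in an assumed 100 MHz channel:
    print(spectral_efficiency(1e9, 100e6))    # 10.0 bit/s/Hz

    # 100 Mbit/s high-mobility target in an assumed 20 MHz channel:
    print(spectral_efficiency(100e6, 20e6))   # 5.0 bit/s/Hz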


According to the members of the 4G working group, the infrastructure and the terminals of 4G will have almost all the standards from 2G to 4G implemented. Although legacy systems are in place to support existing users, the infrastructure for 4G will be packet-based only (all-IP). Some proposals suggest having an open Internet platform. Technologies considered to be early 4G include Flash-OFDM, the 802.16e mobile version of WiMAX (also known as WiBro in South Korea), and HC-SDMA (see iBurst). 3GPP Long Term Evolution may reach the market 1–2 years after Mobile WiMAX is released.


An even higher-speed version of WiMAX is the IEEE 802.16m specification. LTE Advanced will be the later evolution of the 3GPP LTE standard.

Principal Technologies:

1. Baseband techniques

2. OFDM: to exploit the frequency-selective channel property (see the sketch after this list)

3. MIMO: to attain ultra-high spectral efficiency

4. Turbo principle: to minimize the required SNR at the receiving side

5. Adaptive radio interface

6. Modulation, spatial processing including multi-antenna and multi-user MIMO

7. Relaying, including fixed relay networks (FRNs), and the cooperative relaying concept, known as multi-mode protocol
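
To show what item 2 refers to, here is a minimal, hypothetical OFDM transmit-and-receive sketch in Python with NumPy. The 64 subcarriers and 16-sample cyclic prefix are illustrative values (similar to Wi-Fi-style numbers), not parameters of any 4G standard, and the radio channel itself is omitted for brevity.

    import numpy as np

    N_SUBCARRIERS = 64   # assumed number of subcarriers
    CP_LEN = 16          # assumed cyclic-prefix length in samples

    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, size=2 * N_SUBCARRIERS)      # 2 bits per QPSK symbol

    # Map bit pairs to QPSK points: (0,0) -> (1+1j)/sqrt(2), (1,1) -> (-1-1j)/sqrt(2), etc.
    symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

    # Transmitter: the IFFT turns one symbol per subcarrier into a time-domain OFDM
    # symbol, and the cyclic prefix guards against multipath delay spread.
    time_domain = np.fft.ifft(symbols, n=N_SUBCARRIERS)
    ofdm_symbol = np.concatenate([time_domain[-CP_LEN:], time_domain])

    # Receiver: strip the prefix and FFT back to subcarriers. Over a frequency-selective
    # channel each subcarrier would then need only a simple one-tap equalizer, which is
    # the property OFDM exploits.
    recovered = np.fft.fft(ofdm_symbol[CP_LEN:], n=N_SUBCARRIERS)
    assert np.allclose(recovered, symbols)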

History of 3G Cellular Networks

The first pre-commercial 3G network was launched by NTT DoCoMo in Japan, branded FOMA, in May 2001 on a pre-release of W-CDMA technology. The first commercial launch of 3G was also by NTT DoCoMo in Japan on October 1, 2001, although it was initially somewhat limited in scope; broader availability was delayed by apparent concerns over reliability. The second network to go commercially live was by SK Telecom in South Korea on 1xEV-DO technology in January 2002. By May 2002 the second South Korean 3G network was launched by KTF on EV-DO, and thus the Koreans were the first to see competition among 3G operators.


The first European pre-commercial network was on the Isle of Man, operated by Manx Telecom, then owned by British Telecom, and the first commercial network in Europe was opened for business by Telenor in December 2001 with no commercial handsets and thus no paying customers. These were both on W-CDMA technology.

The first commercial United States 3G network was by Monet Mobile Networks, on CDMA2000 1x EV-DO technology, but this network provider later shut down operations. The second 3G network operator in the USA was Verizon Wireless in October 2003 also on CDMA2000 1x EV-DO, and this network has grown strongly since then.

The first pre-commercial demonstration network in the southern hemisphere was built in Adelaide, South Australia by m.Net Corporation in February 2002 using UMTS on 2100 MHz. This was a demonstration network for the 2002 IT World Congress. The first commercial 3G network was launched by Hutchison Telecommunications branded as Three in March 2003.

In December 2007, 190 3G networks were operating in 40 countries and 154 HSDPA networks were operating in 71 countries, according to the Global Mobile Suppliers Association (GSA). In Asia, Europe, Canada and the USA, telecommunication companies use W-CDMA technology with the support of around 100 terminal designs to operate 3G mobile networks.

In Europe, mass market commercial 3G services were introduced starting in March 2003 by 3 (Part of Hutchison Whampoa) in the UK and Italy. The European Union Council suggested that the 3G operators should cover 80% of the European national populations by the end of 2005.

Roll-out of 3G networks was delayed in some countries by the enormous costs of additional spectrum licensing fees. (See Telecoms crash.) In many countries, 3G networks do not use the same radio frequencies as 2G, so mobile operators must build entirely new networks and license entirely new frequencies; an exception is the United States where carriers operate 3G service in the same frequencies as other services. The license fees in some European countries were particularly high, bolstered by government auctions of a limited number of licenses and sealed bid auctions, and initial excitement over 3G's potential. Other delays were due to the expenses of upgrading equipment for the new systems.

By June 2007 the 200 millionth 3G subscriber had been connected. Out of 3 billion mobile phone subscriptions worldwide this is only 6.7%. In the countries where 3G was launched first - Japan and South Korea - 3G penetration is over 70%.[11] In Europe the leading country is Italy with a third of its subscribers migrated to 3G. Other leading countries by 3G migration include UK, Austria, Australia and Singapore at the 20% migration level. A confusing statistic is counting CDMA 2000 1x RTT customers as if they were 3G customers. If using this definition, then the total 3G subscriber base would be 475 million at June 2007 and 15.8% of all subscribers worldwide.

Still, several developing countries have not awarded 3G licenses and customers await 3G services. China delayed its decisions on 3G for many years, mainly because of the government's delay in establishing well-defined standards.[12] China announced in May 2008 that the telecoms sector was being re-organized and three 3G networks would be allocated so that the largest mobile operator, China Mobile, would retain its GSM customer base. China Unicom would retain its GSM customer base but relinquish its CDMA2000 customer base, and launch 3G on the globally leading WCDMA (UMTS) standard. The CDMA2000 customers of China Unicom would go to China Telecom, which would then launch 3G on the CDMA 1x EV-DO standard. This meant that China would have all three main cellular 3G technology standards in commercial use. Finally, in January 2009, the Ministry of Industry and Information Technology of China awarded licenses for all three standards: TD-SCDMA to China Mobile, WCDMA to China Unicom and CDMA2000 to China Telecom.

In November 2008, Turkey auctioned four IMT-2000/UMTS-standard 3G licenses of 45, 40, 35 and 25 MHz. Turkcell won the 45 MHz band with its €358 million offer, followed by Vodafone and Avea leasing the 40 and 35 MHz allocations respectively for 20 years. The 25 MHz license remains to be auctioned.

The first African use of 3G technology was a 3G videocall made in Johannesburg on the Vodacom network in November 2004. The first commercial launch of 3G in Africa was by EMTEL in Mauritius on the W-CDMA standard. In Morocco, in North Africa, a 3G service was provided by the new company Wana in late March 2006.

Telus first introduced 3G services in Canada in 2005. Rogers Wireless began implementing 3G HSDPA services in eastern Canada early 2007 in the form of Rogers Vision. Fido Solutions and Rogers Wireless now offer 3G service in most urban centres.

T-Mobile, a major telecommunications service provider, has recently rolled out a list of over 120 U.S. cities which will be provided with 3G network coverage in 2009.

In 2008, India entered the 3G mobile arena with the launch of 3G-enabled mobile services by Mahanagar Telephone Nigam Limited (MTNL). MTNL is the first mobile operator in India to launch 3G service.

3G Cell Phone

International Mobile Telecommunications-2000 (IMT-2000), better known as 3G or 3rd Generation, is a family of standards for mobile telecommunications defined by the International Telecommunication Union, which includes GSM EDGE, UMTS, and CDMA2000, as well as DECT and WiMAX. Services include wide-area wireless voice telephony, video calls, and wireless data, all in a mobile environment. Compared to 2G and 2.5G services, 3G allows simultaneous use of speech and data services and higher data rates (up to 14.0 Mbit/s on the downlink and 5.8 Mbit/s on the uplink with HSPA+). Thus, 3G networks enable network operators to offer users a wider range of more advanced services while achieving greater network capacity through improved spectral efficiency.


The International Telecommunication Union (ITU) defined the third generation (3G) of mobile telephony standards – IMT-2000 – to facilitate growth, increase bandwidth, and support more diverse applications. For example, GSM (the current most popular cellular phone standard) could deliver not only voice, but also circuit-switched data at download rates up to 14.4 kbps. But to support mobile multimedia applications, 3G had to deliver packet-switched data with better spectral efficiency, at far greater bandwidths.

In 1999, ITU approved five radio interfaces for IMT-2000 as a part of the ITU-R M.1457 Recommendation; WiMAX was added in 2007.

There are evolutionary standards that are backwards-compatible extensions to pre-existing 2G networks, as well as revolutionary standards that require all-new networks and frequency allocations. The latter group is the UMTS family, which consists of standards developed for IMT-2000, as well as the independently developed standards DECT and WiMAX, which were included because they fit the IMT-2000 definition.







Application Gadget


Application gadgets are computer programs that provide services without needing an independent application to be launched for each one; instead, they run in an environment that manages multiple gadgets. There are several implementations based on existing software development techniques, like JavaScript, form input, and various image formats.


Further information: Google Desktop, Google Gadgets, Microsoft Gadgets, and Apple Dashboard widgets



The earliest documented use of the term gadget in the context of software engineering was in 1985 by the developers of AmigaOS, the operating system of the Amiga computers (in intuition.library and, later, gadtools.library). It denotes what other technological traditions call a GUI widget, a control element in a graphical user interface. This naming convention has remained in continuing use (as of 2008) since then.
It is not known whether other software companies are explicitly drawing on that inspiration when featuring the word in names of their technologies or simply referring to the generic meaning. The word widget is older in this context.

Electronic gadgets are based on transistors and integrated circuits. Unlike mechanical gadgets, they need a source of electric power to work. The most common electronic gadgets include the transistor radio, television, cell phones and the quartz watch.

The latest Gadget


A gadget is a small technological object (such as a device or an appliance) that has a particular function, but is often thought of as a novelty. Gadgets are invariably considered to be more unusually or cleverly designed than normal technological objects at the time of their invention. Gadgets are sometimes also referred to as gizmos.

The origins of the word "gadget" trace back to the 19th century. According to the Oxford English Dictionary, there is anecdotal evidence for the use of "gadget" as a placeholder name for a technical item whose precise name one can't remember since the 1850s, with Robert Brown's 1886 book Spunyarn and Spindrift, A sailor boy's log of a voyage out and home in a China tea-clipper containing the earliest known usage in print. The word is also discussed in Michael Quinion's Port Out, Starboard Home: The Fascinating Stories We Tell About the Words We Use. The etymology of the word is disputed. A widely circulated story holds that the word gadget was "invented" when Gaget, Gauthier & Cie, the company behind the repoussé construction of the Statue of Liberty (1886), made a small-scale version of the monument and named it after their firm; however, this contradicts the evidence that the word was already in use in nautical circles, and the fact that it did not become popular, at least in the USA, until after World War I. Other sources cite a derivation from the French gâchette, which has been applied to various pieces of a firing mechanism, or the French gagée, a small tool or accessory. The spring-clip used to hold the base of a vessel during glass-making is also known as a gadget. The first atomic bomb was nicknamed the gadget by the scientists of the Manhattan Project, tested at the Trinity site.
In the book "Above the Clouds" by Vivian Drake, published in 1918 by D. Appleton & Co., of New York and London, being the memoirs of a pilot in the Royal Flying Corps, there is the following passage: "Our ennui was occasionally relieved by new gadgets -- "gadget" is the Flying Corps slang for invention! Some gadgets were good, some comic and some extraordinary."