Monday 26 March 2012

JAMES CAMERON MAKES FIRST EVER SUCCESSFUL SOLO DIVE TO MARIANA TRENCH



Filmmaker and National Geographic Explorer-in-Residence Successfully Completes Dive To Ocean's Deepest Point during DEEPSEA CHALLENGE Expedition

WASHINGTON (March 26, 2012)--Filmmaker and National Geographic Explorer-in-Residence James Cameron descended 35,756 feet (6.77 miles/10.89 km) to reach the "Challenger Deep," the ocean's deepest point located in the Mariana Trench, in his specially designed submersible DEEPSEA CHALLENGER.

The attempt was part of DEEPSEA CHALLENGE, a joint scientific expedition by Cameron, National Geographic and Rolex to conduct deep-ocean research and exploration. Cameron is the only individual ever to complete the dive in a solo vehicle and the first person since 1960 to reach the very bottom of the world in a manned submersible. During the dive, he conducted the first manned scientific exploration of the "Challenger Deep."

The submersible was launched into the Pacific Ocean some 200 miles (322 km) southwest of Guam on Monday, March 26, at 5:15 a.m., local Guam time (Sunday, March 25, at 3:15 p.m., Eastern Time). The voyage down to the "Challenger Deep" took two hours and 36 minutes. Cameron resurfaced at 12 noon local Guam time on Monday, March 26 (10 p.m. Eastern Time on Sunday, March 25). The submersible -- the result of a more-than-seven-year engineering effort -- stayed on the bottom for about three hours as Cameron collected samples for research in marine biology, microbiology, astrobiology, marine geology and geophysics. Cameron also captured still photographs and moving images to visually document the Mariana Trench.

"This journey is the culmination of more than seven years of planning for me and the amazing DEEPSEA CHALLENGE expedition team," said Cameron. "Most importantly, though, is the significance of pushing the boundaries of where humans can go, what they can see and how they can interpret it. Without the support of National Geographic and Rolex, and their unwavering belief that we could successfully make it to the deepest point in the ocean -- and back -- this would not have happened."


"We join the rest of the world in celebrating the exhilarating achievement of Jim Cameron and the DEEPSEA CHALLENGE expedition team," said Terry Garcia, National Geographic's executive vice president of Mission Programs. "In 2012 we are still exploring largely unknown places -- as National Geographic has been doing for nearly 125 years. I'm delighted to say that the golden age of exploration and discovery continues." Details on the expedition can be found at www.DEEPSEACHALLENGE.com; on Twitter by following @DeepChallenge or using #deepseachallenge; or on Facebook at https://www.facebook.com/deepseachallenge.

The "Challenger Deep" has only been reached once before in a manned descent, on Jan. 23, 1960, by then U.S. Navy Lt. Don Walsh -- who is a consultant on the DEEPSEA CHALLENGE expedition and was aboard the expedition ship Mermaid Sapphire during Cameron's successful attempt -- and Swiss oceanographer Jacques Piccard in the bathyscaphe Trieste. Walsh and Piccard spent about 20 minutes on the ocean floor before returning to the surface.

With breakthroughs in materials science, unique approaches to structural engineering and new ways of imaging through an ultra-small, full-ocean-depth-rated stereoscopic camera, Cameron was able to launch the DEEPSEA CHALLENGE expedition, which he hopes will shed light on other virtually unknown deep-water habitats, such as the New Britain Trench and the Sirena Deep.

Cameron's CAMERON | PACE Group, which supplies 3-D technologies and production support services, provided the capability to document today's historic dive in high-resolution 3-D.



In 1960, an experimental Rolex Deep Sea Special watch was strapped to the hull of the Trieste and emerged in perfect working order after withstanding the huge pressure exerted nearly 7 miles (nearly 11 km) below the surface. The DEEPSEA CHALLENGER submersible today carried a new, experimental wristwatch, the Rolex Deepsea Challenge, attached to the manipulator arm, renewing the pioneering engineering challenge the Swiss watchmaker took up 52 years ago.

"Rolex warmly congratulates James Cameron and the DEEPSEA CHALLENGE expedition team for their successful dive into history, in the vanguard of a new and exciting era of marine exploration," said Gian Riccardo Marini, Chief Executive Officer of Rolex SA. "The achievement is a product of their passion, courage, skill and the highest standards of excellence and innovation in advancing human knowledge. We are delighted to be part of DEEPSEA CHALLENGE, perpetuating half a century of tradition in deep-sea diving."

Two of Cameron's passions -- filmmaking and diving -- blend in his feature and documentary films. While working on "Titanic," he took 12 submersible dives to the famed shipwreck two-and-a-half miles down in the North Atlantic. The technical success of that expedition led Cameron to form Earthship Productions, which develops films about ocean exploration and conservation. Since then he has led six expeditions, authored a forensic study of the Bismarck wreck site and done extensive 3-D imaging of deep hydrothermal vent sites along the Mid-Atlantic Ridge, the East Pacific Rise and the Sea of Cortez. Cameron has made more than 70 deep submersible dives, including a total of 33 to Titanic. Fifty-one of these dives were in Russian Mir submersibles to depths of up to 3.03 miles (4.87 km).



The DEEPSEA CHALLENGE expedition is being chronicled in a 3-D feature film for theatrical release about the intensive technological and scientific efforts behind this historic dive -- which will subsequently be broadcast on the National Geographic Channel -- and is being documented for National Geographic magazine. Cameron also will collaborate with National Geographic to create broad-based educational outreach materials.

Additional major funding for the 3-D feature film, education and digital outreach has been provided by the Alfred P. Sloan Foundation, which supports original research and public understanding of science, technology, engineering and mathematics.

Scripps Institution of Oceanography, UC San Diego, is the DEEPSEA CHALLENGE's primary science collaborator. For nearly a decade, Scripps has been involved with Cameron in developing new ways to explore and study the deepest parts of the oceans. With its decades-long history of deep-sea exploration, Scripps is recognized as a world leader in investigating the science of the deep ocean, from exploring the deep's geological features to researching its exotic marine life inhabitants.

The expedition also is collaborating with the University of Hawaii, Jet Propulsion Laboratory and the University of Guam.

Permits for the "Challenger Deep" research were secured from the Federated States of Micronesia. The majority of the Mariana Trench is now a U.S. protected zone under a 2009 proclamation by President George W. Bush that established the Marianas Trench Marine National Monument and gave management responsibility to the U.S. Fish and Wildlife Service in consultation with the National Marine Fisheries Service. The U.S. Fish and Wildlife Service issued permits for dives in the U.S. areas of the trench.

Monday 12 March 2012

Introduction to Lossless Audio Encoding

Lossless encoding is a method of encoding in which no data is lost: the original data can be reconstructed, bit for bit, from the encoded form.

Lossless Audio Encoding

Q: Why special encoding methods for audio?

A: Lossless encoding is not done only for audio and video; there are plenty of generic lossless algorithms such as gzip and RAR, which can obviously be used to compress audio as well. But digital audio has a specific waveform structure, and that structure is often exploited when designing audio-specific algorithms. So, in short: we use special encoding methods for audio because they save more data.

Audio codecs specify a set of tested methods and instructions to follow in order to implement lossless encoding. Every operation an encoder performs should be well defined in the codec specification, with bit-exact definitions, to make sure the encoder does exactly what the codec intends.

Q: How is encoding done?

A: First the audio is divided into smaller parts, i.e. frames, each containing a certain number of samples. Different encoding methods are then applied to these frames, which is what we'll discuss in this article.

Q: How many samples are there in a frame?

A: That depends on the codec. Some codecs specify a fixed frame size but let us divide each frame into sub-frames; some specify a set of possible sizes, from which we have to choose one for the whole file; and some let us change the size from frame to frame.

Example:

In raw PCM the size is set to 1, i.e. we use one sample per frame.
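
More generally, framing just slices the sample stream into chunks. As a rough illustration, here is a minimal Python sketch, not taken from any particular codec specification:

    def split_into_frames(samples, frame_size):
        # Slice a flat list of PCM samples into fixed-size frames.
        # The last frame may be shorter if the sample count is not a
        # multiple of frame_size; real codecs define how to handle this.
        return [samples[i:i + frame_size]
                for i in range(0, len(samples), frame_size)]

    frames = split_into_frames(list(range(10)), 4)
    print(frames)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]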

Encoding

There are a number of ways in which different codecs do lossless encoding, and in this tutorial we'll try to cover the basics of some of them.
First, the encoder checks whether one specific value repeats across the whole frame. This is the simplest case, and the encoder has a way to signal it, i.e. "this frame is all value x". If that isn't the case, the encoder moves on to the next method.
Next, the encoder looks for correlation between the different channels; in stereo, for example, the left and right channels have a lot of similarities. For stereo this is typically called mid/side coding. (A sketch of both steps follows below.)
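
Here is a minimal Python sketch of both ideas. The constant-frame check is generic; the mid/side transform follows the FLAC-style integer formulation (mid = floor((L+R)/2), side = L-R), which other codecs may define differently:

    def is_constant(frame):
        # Simplest case: every sample equals the first, so the whole
        # frame can be signalled as "all value x".
        return all(s == frame[0] for s in frame)

    def mid_side_encode(left, right):
        # mid = floor((L+R)/2), side = L - R (FLAC-style integers)
        mid = [(l + r) >> 1 for l, r in zip(left, right)]
        side = [l - r for l, r in zip(left, right)]
        return mid, side

    def mid_side_decode(mid, side):
        # Exact inverse: the low bit of L+R dropped by the floor
        # division is recovered from the parity of side, because
        # L+R and L-R always share the same parity.
        left = [m + ((s + (s & 1)) >> 1) for m, s in zip(mid, side)]
        right = [l - s for l, s in zip(left, side)]
        return left, right

    L, R = [5, 4, -3], [2, 1, 2]
    assert mid_side_decode(*mid_side_encode(L, R)) == (L, R)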
Linear Prediction

At the most rudimentary level, Linear Prediction (LP) assumes the current sample is the same as the last one. The difference between the predicted sample and the actual sample becomes what we call the residual sample.

Types:
Linear Prediction with an order of 1
This involves one coefficient. This coefficient can be 1, 2, etc. If it is 1, we simply multiply the previous sample by 1 and subtract the result from the current sample to get the residual sample; the same goes for 2, 3, etc.
Linear Prediction with an order of 2
This involves two coefficients. The algorithm can be broken down into the following steps (a sketch follows below):
First, the algorithm takes the sample from two samples ago, multiplies it by the 2nd coefficient and adds the result to a running total.
Then it takes the previous sample, multiplies it by the 1st coefficient and adds the result to the running total.
Thirdly, it subtracts the running total from the present sample to get the residual.
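
A minimal Python sketch of computing residuals for orders 1 and 2 (the coefficient values below are illustrative, not taken from any specific codec):

    def lp_residuals(samples, coeffs):
        # residual[n] = x[n] - sum(coeffs[k] * x[n-1-k]): coeffs[0]
        # applies to the previous sample, coeffs[1] to two samples ago.
        # The first len(coeffs) samples are kept verbatim as warm-up.
        order = len(coeffs)
        out = list(samples[:order])
        for n in range(order, len(samples)):
            prediction = sum(c * samples[n - 1 - k]
                             for k, c in enumerate(coeffs))
            out.append(samples[n] - prediction)
        return out

    signal = [10, 12, 13, 13, 12, 10]
    print(lp_residuals(signal, [1]))      # order 1: [10, 2, 1, 0, -1, -2]
    print(lp_residuals(signal, [2, -1]))  # order 2: [10, 12, -1, -1, -1, -1]

On this smoothly varying signal the order-2 predictor (a straight-line extrapolation from the last two samples) leaves smaller residuals than the order-1 predictor, and smaller residuals take fewer bits to store.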

Q: What does the codec do with the initial samples, i.e. the 1st sample in LP with an order of 1, and the 1st and 2nd samples in LP with an order of 2?

A: Again, that depends on the codec. FLAC encodes the first N samples as normal PCM (where N is the order of the linear prediction filter) and then starts LP encoding from origin + maximum order. ALS is totally different: it does a progressive prediction at the beginning, so the first sample is raw PCM (order 0), the 2nd sample uses order 1, the 3rd uses order 2, and so on up to the maximum order.

Pitch Prediction

A signal that is very tonal (its samples look more like a sine wave) has almost the same wavelength and amplitude from cycle to cycle. In that case, the sample one wavelength in the past resembles the current sample closely and is likely to be a better predictor than the immediately preceding N samples, so we use it instead. The encoder can have some functionality to analyse the data and see whether this is beneficial, and then usually codes the distance into the past, and maybe a scale factor to compensate for changing loudness. (A sketch follows below.)
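
A minimal Python sketch of this long-term ("pitch") predictor; the lag and gain here are assumed inputs, whereas a real encoder would search for the values that minimise residual energy:

    def pitch_residuals(samples, lag, gain=1):
        # Predict each sample from the one `lag` samples in the past,
        # scaled by `gain`; the first `lag` samples are kept verbatim.
        out = list(samples[:lag])
        for n in range(lag, len(samples)):
            out.append(samples[n] - gain * samples[n - lag])
        return out

    # A perfectly periodic tone with period 4: every sample after the
    # first period is predicted exactly, so those residuals are zero.
    tone = [0, 7, 0, -7] * 3
    print(pitch_residuals(tone, lag=4))
    # [0, 7, 0, -7, 0, 0, 0, 0, 0, 0, 0, 0]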

Some lossless codecs also include checksum data to verify that the decoded audio is the same as the original. Normally a large checksum (typically MD5) of the original audio is transmitted in the stream so that the decoder can check its result.
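
For example (a minimal sketch, assuming the samples are hashed as 16-bit little-endian PCM; the actual byte layout is defined by each codec):

    import hashlib

    def audio_md5(samples):
        # Hash the raw sample bytes; a decoder recomputes this over its
        # output and compares it with the checksum stored in the stream.
        raw = b"".join(s.to_bytes(2, "little", signed=True)
                       for s in samples)
        return hashlib.md5(raw).hexdigest()

    original = [10, 12, 13, 13, 12, 10]
    decoded = [10, 12, 13, 13, 12, 10]  # what the decoder produced
    assert audio_md5(decoded) == audio_md5(original)  # lossless round trip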

Conclusion

Lossless encoding is a vast topic, and this tutorial doesn't cover even half of it. What we have tried to do here is lay some basic foundations of lossless encoding and how it is done.


Sunday 11 March 2012

7 Reasons Why Android OS Poses a Threat to Windows 8

The age of smartphones and tablets has come, and traditional devices like PCs and laptops are losing ground.



These days consumers are looking for devices that satisfy specific needs. The operating system of the device plays the most important role in ensuring true multitasking while making apps run smoothly. Higher virtual memory can also help here, by allowing the operating system to reuse the same memory locations for multiple tasks instead of depending on RAM alone, where space is allocated for each individual application's tasks.

However, the past couple of years have been remarkable for smartphones and tablets, with the meteoric rise of Google’s Android OS. The new life in Microsoft’s mobile strategy with its much-ballyhooed Windows 8 OS brings a new twist, as the American multinational plans to have the new OS power smartphones, tablets and PCs around the world. Windows 8, which was announced at CES, will add support for ARM microprocessors in addition to the traditional x86 microprocessors from Intel and AMD. ARM CPUs are generally found as SoCs in mobile devices, which implies that Windows 8 will be compatible with mobile devices such as netbooks, tablet personal computers and smartphones. The other player, which has greatly magnified its ecosystem, is Apple, which uses iOS on most of its high-end devices, like the iPhone, iPod Touch and iPad.



In fact, Windows 8 is focused on tablets, as Microsoft expects to gain a foothold in that sector. For now Apple has a strong claim to being the best tablet maker and manufacturer, as the iPad is yet to find a worthy competitor. Meanwhile Apple, along with Google, has positioned itself as a top competitor when it comes to OS platforms for smaller devices, while Microsoft still leads the larger device platform around the world with its Windows operating system.

But Apple is quietly and steadily gaining ground with Mac OS shipment. Even Google plans to introduce a separate Chrome OS with cloud computing ability.

Introduced in 1985, Microsoft Windows came to dominate the world's personal computer market, overtaking Mac OS. Although Microsoft has ended mainstream support for Windows XP, it still leads the market with a 52.46 percent share. Compared to other OS versions from Microsoft, it is considered the best operating system by many users. Windows 7 has come in second place and has attracted many consumers around the world. The new approach by Microsoft of combining its mobile operating system with its desktop one shows the way forward, as the market giant is expected to make the cut with the new OS platform.

Operating systems are the heart of computer systems. Windows has long been the most famous operating system, but Android is up to the mark nowadays. However, Microsoft’s plan of introducing a single operating system for all platforms may backfire.

Here is a look at seven reasons why the Android operating system will gain ground on Windows 8 in the mobile platform:

Various Gadget Option:

Microsoft is slowly tying up with various mobile hardware manufacturers to run its Windows Phone 7 OS on different devices. However, Android OS has found a home in most of the mobile devices offered by Samsung, Motorola, HTC, Acer, Dell, Sony Ericsson, LG, Lenovo, Archos, Toshiba etc. Apart from tablets and smartphones, even e-reader devices are supported by the Android platform, which is expected to power Amazon’s Kindle tablet soon. Meanwhile, Windows Phone 7 is mostly found only in Acer, Dell, HTC, LG and Samsung mobile devices. Hence Windows 8 becoming compatible with a wide range of mobile devices may take some time.

Operating System:
Also, Windows 8, when released, will still be a new platform combining the features of Windows Phone 7, Windows 7 and XP. Recall that Windows Vista was received critically, as it was plagued by many errors: in its first year of availability, PC World rated it the biggest tech disappointment of 2007, and InfoWorld rated it #2 of tech's all-time 25 flops.

Windows 8 features a new start screen similar to the one in Windows Phone 7 which includes live application tiles. It replaces the Start menu, being triggered by the Start button or Windows key, and is also the first screen shown on startup. The user can go to the regular desktop by choosing the desktop tile or a traditional desktop-based application.

Rather than creating a new operating system or tablet, or using the existing Windows Phone 7 as the basis for a Microsoft-powered tablet, the company is planning to use an update to the traditional Windows PC operating system.

Microsoft must learn from Apple. Apple, while announcing the iPad, showed off early versions of the iWork apps with Numbers, Pages and Keynote. Those apps were different from their Mac equivalents and were specifically optimized for the tablet form factor and the size of your fingertips. iOS is nowhere similar to Mac OS. The main reason Apple’s iPad and iPhone were a huge hit was that they did not attempt in any way to replicate the desktop PC experience, which Windows Mobile-powered Windows tablets did. Steve Jobs has always been skeptical about it and is certainly betting that users don’t want to reach out and touch their monitors. The best example was the HP Slate, which was powered by Windows but didn’t do well in the market. Hence the idea of bringing the same desktop operating system to tablets may not work in Microsoft's favor.

Windows Phone 7, on the other hand, is purely designed for mobile devices; hence it is a better prospect for Microsoft to continue to improve on that OS. It has a good interface with Live Tiles and the ability to bring items together.

Mobile systems have gotten better and faster. The advent of Android certainly brought better services and features to the end user. Users are currently enjoying larger applications, similar to what happened in the 90s when they started to enjoy SMS, MMS, etc. The two most popular mobile operating systems are iOS and Android, and these two hold the majority of market share. Presently Android holds 36 percent of the smartphone market, compared to iOS's 16.8 percent; Microsoft stands at 3.6 percent, as of Gartner's 2011 Q1 report.

Apps:
Android and iOS have built an app-centered approach. Android has an estimated 500,000+ apps, while official figures show 250,000+. Apple likewise has more than 500,000 apps, of which close to 90,000 are dedicated specifically to the iPad. Windows Marketplace has 9,000+ apps and is still playing the catch-up game.

When the Android Market is considered, one can find a variety of ways to find and download Android apps. The apps are available both on Android devices and on the Web. Apps can be downloaded on the Android device itself, or on a PC or Mac and later transferred to the mobile device.

The major drawback of Windows 8 for the mobile platform is that the design of the operating system is not app-centric. It is built to deliver information efficiently so you can complete the job at hand and move on to something else.

Brand Prominence:
Google has earned itself the top position in the market and is approaching a phase where it can overtake iOS devices. Android is one of the best mobile OS platforms, and the upcoming Ice Cream Sandwich already enjoys a fair bit of credibility. It is not the same with Windows 8, which has yet to prove its mettle in the tablet and smartphone segments.

Features and service:
Most of us use Google services like Gmail, Google Calendar and Google Documents. And iOS is centered on MobileMe and iTunes. Having an Android device makes the experience more pleasant. With the popularity of Google products, you are assured of getting onto a good team. Android phones have made a massive impact on the mobile industry.

When it comes to features, Android offers a number of capabilities that competing smartphones don't have. It includes built-in voice search and voice control features, so you can do things like initiate phone calls, search the Web, compose messages and send e-mail by talking rather than tapping. Android also features tethering via Wi-Fi, USB or Bluetooth, so you can use it to share your Internet connection with other devices, such as a laptop, a tablet or another smartphone. There is also support for NFC.

Microsoft focuses more on tools and services. Windows 8 as a platform is clearly centered on Microsoft applications and cloud-based services. Also, there is no universal inbox in Microsoft's mobile OS platform, unlike Android and iPhone, where all the e-mail messages from multiple services are shown in a single location. And the ability to tether via Wi-Fi, USB or Bluetooth is not found in Windows Phone 7, while Windows 8 is mostly centered on desktop-like features.

Customization and Open Source:
Openness is another point that may count against Microsoft. When it comes to openness, Microsoft's policy on Windows Phone 7 is closer to Apple's stance on iOS than it is to Google's approach to Android. Android is open source, which means that manufacturers and wireless providers can customize it in any way they want.

Android OS is specifically designed from the start to be customizable and it can be tweaked more than iOS and Windows 8. One of Android's biggest strengths is its flexibility. Unlike Apple and its iPhone, Google lets users and third-party developers tweak just about every aspect of the Android interface, and the customization options are nearly endless. From the desktop wallpaper to the notification sounds to the blinking LED indicator light, Android is easy to personalize. Widgets come in all shapes and sizes. Several are preloaded on your phone, but many others are available either as stand-alone downloads or as part of full-fledged applications in the Android Market. Depending on your device, using hotkeys to navigate your phone might save you some time. Android has its own built-in set of keyboard shortcuts, but you can also create your own. However, the same will most probably not be allowed on Windows 8.

It may be too late:
Quite a few analysts predict that Microsoft’s decision to jump into the tablet space now is a mistake. They say that Windows 7 should have been more tablet-friendly, which would have given Microsoft more time to try and limit Android’s success.

New Sony Technology: The NextGen Computer

Technology is finally catching up to the imaginations of movie creators. The main item in this post is Sony’s new Nextep Computer. Even though this computer isn’t set for release until 2020, it still shows how far we have come in the last 50 years. I’m still waiting for the robot companions like in Blade Runner, Cherry 2000, Weird Science, and even a little something for the women like in A.I. Artificial Intelligence, but this will do for now.

The Nextep Computer is built around a super-flexible OLED touchscreen; it has a holographic projector (for the screen), pull-out extra keyboard panels and social networking capabilities. Those are just some of its specs and functions as of today, so imagine what this device will do in 10 years when it is released.








Amazing Transparent iPhone 4G concept


See this pic: it's an amazing iPhone 4G concept I found on Google Images. The look is really awesome; a big wow to the transparent iPhone 4G. In the pic it looks bigger than current iPhones, though. They should move this from concept to reality.

Here is a list of other concepts that should be “moved to reality” first:
Flying cars
Antigravity
Faster-than-light travel
Food replicators
An end to hunger
World peace
Just to name a few.
Then we can worry about transparent circuitry and the resulting transparent iPhone.

Sixth Sense Technology

integrating information with the real world

'SixthSense' is a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information.


We've evolved over millions of years to sense the world around us. When we encounter something, someone or some place, we use our five natural senses to perceive information about it; that information helps us make decisions and choose the right actions to take. But arguably the most useful information that can help us make the right decision is not naturally perceivable with our five senses, namely the data, information and knowledge that mankind has accumulated about everything, which is increasingly all available online. Although the miniaturization of computing devices allows us to carry computers in our pockets, keeping us continually connected to the digital world, there is no link between our digital devices and our interactions with the physical world. Traditionally, information is confined to paper or to a screen. SixthSense bridges this gap, bringing intangible, digital information out into the tangible world and allowing us to interact with this information via natural hand gestures. ‘SixthSense’ frees information from its confines by seamlessly integrating it with reality, thus making the entire world your computer.

The SixthSense prototype comprises a pocket projector, a mirror and a camera, coupled in a pendant-like wearable device. Both the projector and the camera are connected to the mobile computing device in the user’s pocket. The projector projects visual information, enabling surfaces, walls and physical objects around us to be used as interfaces, while the camera recognizes and tracks the user's hand gestures and physical objects using computer-vision techniques. The software processes the video stream captured by the camera and tracks the locations of the colored markers (visual tracking fiducials) at the tips of the user’s fingers using simple computer-vision techniques. The movements and arrangements of these fiducials are interpreted as gestures that act as interaction instructions for the projected application interfaces. The maximum number of tracked fingers is constrained only by the number of unique fiducials, so SixthSense also supports multi-touch and multi-user interaction.
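
The SixthSense tracking code itself isn't reproduced here, but as a rough illustration of the idea, this Python/OpenCV sketch finds the centroid of one colored fingertip marker by thresholding in HSV space (the color range is an assumed value for a red marker, not taken from the SixthSense implementation):

    import cv2
    import numpy as np

    # Assumed HSV range for a red marker; would need tuning for real lighting.
    LOWER = np.array([0, 120, 120])
    UPPER = np.array([10, 255, 255])

    def track_marker(frame):
        # Return the (x, y) centroid of the largest marker-colored blob,
        # or None if no marker is visible in this frame.
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        blob = max(contours, key=cv2.contourArea)
        m = cv2.moments(blob)
        if m["m00"] == 0:
            return None
        return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

    cap = cv2.VideoCapture(0)       # default webcam
    ok, frame = cap.read()
    if ok:
        print(track_marker(frame))  # e.g. (312, 240), or None
    cap.release()

Tracking several fingers would repeat this per marker color; turning the centroid trajectories into gestures is a separate recognition step.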

The SixthSense prototype implements several applications that demonstrate the usefulness, viability and flexibility of the system. The map application lets the user navigate a map displayed on a nearby surface using hand gestures, similar to gestures supported by multi-touch based systems, letting the user zoom in, zoom out or pan using intuitive hand movements. The drawing application lets the user draw on any surface by tracking the fingertip movements of the user’s index finger. SixthSense also recognizes the user’s freehand gestures (postures). For example, the SixthSense system implements a gestural camera that takes photos of the scene the user is looking at by detecting the ‘framing’ gesture. The user can stop by any surface or wall and flick through the photos he/she has taken. SixthSense also lets the user draw icons or symbols in the air using the movement of the index finger, and recognizes those symbols as interaction instructions. For example, drawing a magnifying-glass symbol takes the user to the map application, and drawing an ‘@’ symbol lets the user check his mail. The SixthSense system also augments the physical objects the user is interacting with by projecting more information about them onto the objects themselves. For example, a newspaper can show live video news, or dynamic information can be provided on a regular piece of paper. The gesture of drawing a circle on the user’s wrist projects an analog watch.


The current prototype system costs approximately $350 to build. Instructions on how to make your own prototype device can be found here (coming soon).

Introduction to Artificial Intelligence

The phrase Artificial Intelligence (AI), which was coined by John McCarthy three decades ago, evades a concise and formal definition to date. One representative definition pivots on comparing the intelligence of computing machines with that of human beings. Another is concerned with the performance of machines at tasks which "historically have been judged to lie within the domain of intelligence". None of these definitions or the like has been universally accepted, perhaps because of their reference to the word "intelligence", which at present is an abstract and immeasurable quantity.

A better definition of artificial intelligence therefore calls for a formalization of the term "intelligence". Psychologists and cognitive theorists are of the opinion that intelligence helps in identifying the right piece of knowledge at the appropriate instant of decision making. The phrase "artificial intelligence" can thus be defined as the simulation of human intelligence on a machine, so as to make the machine able to identify and use the right piece of knowledge at a given step of solving a problem.

A system capable of planning and executing the right task at the right time is generally called rational. Thus, AI may alternatively be stated as a subject dealing with computational models that can think and act rationally [1-4]. A common question then naturally arises: does rational thinking and acting include all possible characteristics of an intelligent system? If so, how does it represent behavioral intelligence such as machine learning, perception and planning? A little thought, however, reveals that a system that can reason well must be a successful planner, as planning in many circumstances is part of a reasoning process. Further, a system can act rationally only after acquiring adequate knowledge from the real world. So perception, which stands for building up knowledge from real-world information, is a prerequisite for rational action. One step further, a machine without learning capability cannot possess perception. The rational action of an agent (actor) thus calls for possession of all the elementary characteristics of intelligence. Relating artificial intelligence to computational models capable of thinking and acting rationally therefore has a pragmatic significance.



30+ Pakistani Websites Defaced by Indian Cyber Leets, Including High-Profile Professional Websites


Hacked websites:

http://agahi4all.com/
http://www.allgk.com/
http://awanassalam.co.cc/
http://www.bulkblogpostingservices.info/
http://cameroontraveltours.com/
http://www.chanitimber.com/
http://allgk.com/gujaratgk/
http://2gb.com.pk/
http://www.microtek.pk/
http://www.mwa.com.pk/
http://www.al-khairtrust.org/
http://www.theprofessionals.com.pk/
http://www.abbasikalhora.com/
http://alsehraa.com/
http://apnachhach.com/
http://www.bharakahu.com/
http://www.buttjee.com/
http://easynarrative.com/
http://opinionfactor.com/
http://www.pakistanmarkaz.com/
http://www.rcklaw.org/
http://www.kohat1.com/index.php
http://www.forex-mag.com/
http://www.zeeshe.com/
http://www.sajjad.net/
http://www.sigmaseo.com/
http://www.sigmaeye.com/
http://sigmawebmktg.com/
http://thezroos.com/
http://www.vogueapparel.net/
Mirrors:
http://www.zone-h.net/mirror/id/17192444
http://www.zone-h.net/mirror/id/17192443
http://www.zone-h.net/mirror/id/17192442
http://www.zone-h.net/mirror/id/17192441
http://www.zone-h.net/mirror/id/17192440
http://www.zone-h.net/mirror/id/17192439
http://www.zone-h.net/mirror/id/17192438
http://www.zone-h.net/mirror/id/17192437
http://www.zone-h.net/mirror/id/17192457
http://www.zone-h.net/mirror/id/17192456
http://www.zone-h.net/mirror/id/17192455
http://www.zone-h.net/mirror/id/17192454
http://www.zone-h.net/mirror/id/17192453
http://arab-zone.net/mirror/114277/ajhindustries.com/
http://arab-zone.net/mirror/114278/noorlimited.com/web/
http://arab-zone.net/mirror/114279/sralaw.com.pk/

Business in space looks golden, says Lord British



AUSTIN, Texas--When Richard Garriott went to space, he lost money on the deal. Next time, he wants to make a profit.

In October 2008, Garriott, a well-known video game designer, traveled as a space tourist to the International Space Station.

The son of a NASA astronaut, Garriott grew up thinking everyone goes to space--because all his neighbors had been--but his dream of following in his father's footsteps was dashed when he learned as a teenager that his eyesight disqualified him for the job. "Being told I was not going to be allowed to go into space," Garriott said, "was what set me on my course to prove them wrong."

As a game designer, he made a fortune, and that money, along with the emergence of a space tourism industry, rekindled his wishes. To make it to space, he had to pony up tens of millions of dollars. But as a savvy businessman, he raised some of that back with a series of commercial experiments. Just not enough to cover the whole price of his ticket.

But now, Garriott said during a presentation at South by Southwest here today, there are unprecedented business opportunities in space, many that will benefit NASA and many others that could become highly lucrative for the companies that understand how to work today's booming public/private partnerships.

According to Garriott, who is also known in the gaming business as Lord British, today's booming private space business means that the cost of a launch is being radically slashed from the days when only governments sent rockets into space. Reductions in those costs by a factor of between ten and 100--due to competition in the private space industry and the fact that NASA no longer is building its own spacecraft--will open up the opportunity for commercial activities that can go alongside the NASA projects on those ships.

To begin with, explained Garriott--who was the subject of a documentary on his trip to space, and who is part of the company Space Adventures, as well as a member of the NASA Civilian Oversight Council--the advent of a number of companies able to produce rockets means that NASA may now buy from a company like Boeing once and then turn to a competitor next time if its price is lower. And because of that, these companies are being forced to bring their costs down in order to have a chance at government business.

Another advance is that companies like SpaceX have developed boosters that can be recovered and reused after launch--unlike traditional space missions which required new boosters each time NASA sent one up. That is yet another factor in the rapid and dramatic drop in the cost of a launch. Garriott said that it should be possible to launch rockets into space for just fractions of what today's space vehicles, with their disposable boosters, cost.

And this may become even cheaper, Garriott said, if next-generation boosters are capable of landing themselves under their own power rather than having to be recovered far from the launch site. "The costs go from the hundreds of millions down to the ones of millions," he said. "That would have made [my trip to space] profitable."

Protein crystal growth and vaccine research

When he went to space the first time, Garriott was able to raise some funds by bringing along experiments in protein crystal growth. But he only had the one mission.

If the cost truly does drop into the low seven figures, Garriott suggested, it would instantly become profitable to conduct significantly more experiments, work that could easily bring in tens of millions of dollars. Among the projects that could quickly be profitable for those willing to invest would be continued work on protein crystal growth--given that microgravity seems to generate much larger, clearer crystals than on Earth--as well as work on the development of vaccines.

"It turns out that biological research is the first low-hanging fruit," Garriott said, "one of the first businesses that can be built on top of these capabilities."

Another stems from a project that has been undertaken by the Japanese to power a city by building a huge solar farm in space at a cost of around $30 billion by 2030. Garriott said that in his view, the Japanese have adopted "too grand a first goal" because it will likely be too hard to put that much mass and assembly into space.

But he does think space-bound solar power generation is profitable, albeit at a much smaller scale. So Garriott promoted the idea of putting up single-launch power generators capable of beaming power to, for example, forward military bases. "You can't power a city," he said, "because it's not as cheap [at that scale] as coal. But it's competitive to the military front lines."

At the same time, asteroids present a potential celestial gold mine. One of the benefits asteroids offer--especially because there are countless of them not far from Earth--is that it can often be possible to see through a telescope the kinds of materials they contain. That means that businesses could target specific asteroids for mining projects, particularly because cheap launches could make it profitable to begin exploring the rocks for resources.

Three decade plan

At the end of his talk, Garriott presented what he said is his 30-year plan for space exploration and business development.

In the first ten years, he said, sub-orbital tourism will take off, along with commercial low-Earth orbit research. Though NASA may take the lead on investigating asteroids, Garriott added, he expects that work to be handed off to commercial entities in the first decade.

During the second ten years, private companies may have the opportunity to help build lunar research stations that could serve as outposts for future Mars missions. At the same time, Garriott predicted, NASA could begin offering commercial prizes for building a supply chain on the surface of Mars. If, for example, NASA offered a billion dollars to the first team to build a survivable igloo on Mars, he suggested, business would jump at the chance--and the potential profits down the line.

Finally, by the third decade, NASA could begin leading mankind's charge to be a multi-planet species. But rather than sending people to Mars and then bringing them back, Garriott said it would be far more efficient and economical to create the infrastructure that will support humans on Mars and then begin to colonize the Red Planet. Trying to get people off Mars and return them is almost as hard as getting them there in the first place, Garriott said, adding, "I don't think it's worth it."

Plus, he added, finding volunteers to be Mars pioneers should be no problem. Assuming that the support infrastructure was in place, Garriott posited, many people would agree to spend the remainder of their lives on Mars. And to prove his point, he asked who in the room would volunteer for such a radical change of lifestyle. More than half the people in the room raised their hands.

Nuclear renaissance? More like nuclear standstill

Modern nuclear power designs are safer, but that isn't enough to rekindle the long-sought nuclear renaissance.

One year after the Fukushima nuclear disaster, nuclear power is either slogging ahead or at the end of the road, depending on which country you live in. How nuclear grows in the years ahead largely depends on whether new designs can demonstrate better safety and, more importantly, compete on price.

Rather than freeze nuclear's progress, Fukushima simply made it harder to make the case for building new plants, experts say. Indeed, one of the primary barriers to a nuclear renaissance is cheap natural gas, not public opinion.

"The nuclear renaissance was a very optimistic view that many new nuclear plants would be built, but the slowdown was largely triggered by events that occurred before Fukushima," said Andrew Kadak, a former professor of nuclear science and engineering at the Massachusetts Institute of Technology. Because of new drilling techniques, natural gas prices have plummeted in the last few years, making it more attractive.


"If natural gas is currently where it is, it's difficult to justify the large capital investment for electricity production for the long term. Nuclear is a 30- or 60-year commitment," Kadak said. Also stacked against nuclear are rising construction costs and regulatory delays, he said.

At the same time, the Nuclear Regulatory Commission has been busier than ever in advancing new nuclear plants.

Late last year, it approved the Westinghouse AP1000, a plant type chosen by utilities in the U.S. and China. Last month, the NRC issued construction and operation permits--the first since 1978--for two reactors near Augusta, Ga., with expected price tags of $7 billion each. The intention is to have one running in 2016 and another in 2017.

From a financial point of view, the utilities involved at the Vogtle complex in Georgia are able to pay for the reactors because state regulators approved a fee that ratepayers are now paying on their bills. The project also gained a conditional loan guarantee of $8 billion from the Department of Energy.

Uneven global response
In other countries, the response to Fukushima has been lopsided, with some countries pulling back dramatically and others, like the U.S., only pausing to consider new safety standards.

After relying on nuclear for more than 20 percent of its electricity, Japan has stopped operation of all but two of its plants, which are expected to be unplugged, too. To replace nuclear plants, the country is relying on natural gas and renewable energy such as solar.

In another dramatic shift, Germany is phasing out its nuclear plants, which provide about one-quarter of its electricity. Switzerland stopped the advance of three planned plants while keeping its existing five operating, and Italy recently voted against nuclear expansion, according to Ann MacLachlan, the European bureau chief for Platts Nuclear.

Elsewhere, though, nuclear is growing. In power-hungry China, there are more than 26 power plants already under construction, with 10 planned for Russia, and seven in India, according to the Nuclear Energy Institute. There are more than 200 new reactors under construction or planned worldwide.

What takes place in fast-growing countries with climbing electricity demand will set the direction for nuclear power's future, MacLachlan said. "The key is what happens in Asia. Despite all the publicity given to nuclear phase-outs, what those two Asian countries--China and India--do in the wake of Fukushima is crucial to the shape of the world nuclear industry going forward," she said during a Webinar this week.


Many companies are working to make nuclear power safer and cheaper, but whether new technologies can achieve that is still an open question.

The AP1000 has "passive safety" features, which use gravity and convection to provide coolant for three days in the case of plant shutdown and loss of power. The European Pressurized Reactor design, now being constructed at four plants in Finland and France, also has improved safety compared with plants in operation today. Nuclear advocates say today's designs would have been able to withstand a Fukushima-like earthquake and tsunami.

There are also fourth-generation designs with underground containment, passive safety, and the ability to store 60 years' worth of spent fuel. These latest designs are a "step change" in safety across the life cycle of the fuel, reducing the chance of accidents and proliferation, according to Forrest Rudin, the vice president of business operations at nuclear startup Hyperion. "Fukushima refocused attention to safety, which is always good," Rudin said.

To lower the cost of new nuclear, companies including startup NuScale Power and Babcock & Wilcox propose modular plants that would lower the upfront cost of construction and let utilities bring reactors into operation more quickly. NuScale Power, for example, intends to build 45-megawatt reactors, an alternative to the giant 1,000-megawatt power plants which supply about 1 million U.S. homes.

Advocates say advanced nuclear technology is critical to lowering carbon emissions because the combination of renewable energy with storage can't cost-effectively replace the baseload power supplied by nuclear and fossil fuel plants.

Bill Gates, who is funding fourth-generation nuclear startup TerraPower, said earlier this month that there's no inherent reason why nuclear can't be safer than other sources of energy, but it still takes many years to demonstrate their market viability.

"There are nuclear designs, including ours, that on paper can compete on that basis," he said at the ARPA-E Energy Innovation Summit. "But getting a new nuclear design (licensed), totally figuring out the safety, and getting a demo plant--that's very hard."

One year after Fukushima, the nuclear power industry faces two different challenges. One is showing nuclear critics that industry and regulators can improve the safety of existing plants following the Fukushima disaster. The other is making a case that nuclear power should provide more energy in the future.
 
