Awesome Music Players That Can Be Operated Remotely


For Windows
First, download and install MusicBee. Then add your playlists. Finally, download and install the MusicBee Remote plugin.


For Linux

1. Install Clementine:
You can add the Clementine PPA and receive updates by running the command below in a terminal window (Press Ctrl+Alt+T to open the terminal):
sudo apt-add-repository ppa:me-davidsansome/clementine
So far, the PPA supports Ubuntu 14.04, Ubuntu 13.10, Ubuntu 12.10, and Ubuntu 12.04.
After adding the PPA, install the player via the commands below, or check for updates via Software Updater:
sudo apt-get update
sudo apt-get install clementine
Alternatively, you can install from the .deb package on the Clementine website. Not sure whether you need the 32-bit or 64-bit build? Check System Settings -> Details. Once downloaded, just double-click the .deb file to open it in Ubuntu Software Center and click the Install button.
Tip: Restart your computer if you were upgrading Clementine from a previous version.
2. Open the music player and go to Tools -> Preferences. Select Network Remote in the left pane, then:
  • Enable remote control by ticking ‘Use a network remote control’.
  • Leave the port at its default or change it, depending on your needs.
  • Choose LAN-only access or both LAN and WAN access.
  • Set an authentication code so that clients must enter it to connect.
  • Tick ‘Allow downloads’ if you want to download songs from Clementine to your Android device.



3. On your Android device, install the ‘Clementine Remote’ app from Google Play.

Once installed, start the app, enter the IP address of the machine running Clementine, and tap Connect.
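If the app fails to connect, it helps to confirm that Clementine's network remote is actually reachable from another machine on your LAN. Below is a minimal Python sketch; the IP address is a placeholder and the port is assumed to be the Network Remote default shown in the Preferences dialog (commonly 5500), so substitute whatever values you configured in step 2.

import socket

HOST = "192.168.1.50"   # placeholder: IP of the machine running Clementine
PORT = 5500             # assumed Network Remote port; use the value from Preferences

def remote_is_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to the network remote succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if remote_is_reachable(HOST, PORT):
    print("Network remote is reachable - the Android app should be able to connect.")
else:
    print("Could not reach the network remote - check the IP, port, and firewall settings.")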


Google Can Understand Everything in Your Image

The competition, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), has three tracks: classification, classification with localization, and detection. The classification track measures an algorithm’s ability to assign correct labels to an image. The classification with localization track assesses how well an algorithm models both the labels of an image and the location of the underlying objects. Finally, the detection challenge is similar but uses much stricter evaluation criteria. As an additional difficulty, this challenge includes many images with tiny objects that are hard to recognize. Superior performance in the detection challenge requires pushing beyond annotating an image with a “bag of labels”: a model must be able to describe a complex scene by accurately locating and identifying many objects in it. As examples, the images in this post are actual top-scoring inferences of the GoogleNet detection model on the validation set of the detection challenge.
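To make those “stricter evaluation criteria” concrete: in the localization and detection tracks a predicted bounding box only counts as correct if its label matches and it overlaps the ground-truth box strongly enough, usually measured by intersection over union (IoU). The short Python sketch below is just an illustration of that overlap measure with a commonly used 0.5 threshold, not the challenge's official scoring code.

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (zero area if the boxes do not intersect).
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

predicted = (48, 40, 210, 200)
ground_truth = (50, 50, 200, 220)
print(round(iou(predicted, ground_truth), 2))  # about 0.78, so this box would count as a hit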


Get Smart with Google Translate: Word Lens, Instant Voice Translation, and Language Detection

When Google acquired Word Lens in May 2014, it was clear that it was only a matter of time until the startup’s impressive visual translation technology would be folded into Translate. That moment is coming today – Word Lens integration and improved voice translations are coming in the latest Google Translate update.
Word Lens lets you point your smartphone at foreign-language text and have it instantly replaced with your language of choice, right on the screen. Until this update, you had to scan text with your device and have it translated and displayed in a text box, a clunky experience in most cases. Word Lens removes that friction, and everything happens in real time. Street signs, restaurant menus, product labels: there are tons of situations where you could find it useful.
This genuinely amazing capability will be available in English paired with French, German, Italian, Portuguese, Russian, or Spanish. That means you will be able to translate from English to French and the other way around, but not from French to Russian, for instance. Google says more languages are coming.
The second big feature in this update is instant voice translation. Before the update, translating speech required tapping the mic button each time someone said something, as well as switching between languages to accommodate the other speaker. Now all of that happens on the fly, because Translate detects which language is being spoken without requiring your input.
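Google has not published how Translate identifies the spoken language, but the underlying idea of automatic language identification can be illustrated with the open-source langdetect package for Python (a port of an open-source language-detection library, and not what the Translate app itself uses):

# pip install langdetect  (third-party package; shown only to illustrate the idea)
from langdetect import detect

for phrase in ["Where is the train station?", "Où est la gare ?", "Wo ist der Bahnhof?"]:
    # Prints language codes such as en, fr, de; very short phrases can occasionally be misdetected.
    print(phrase, "->", detect(phrase))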

Download with Google Play



New Technology: A Global Wi-Fi Network Using Google's Project Loon

Many of us think of the Internet as a global community. But two-thirds of the world’s population does not yet have Internet access. Project Loon is a network of balloons traveling on the edge of space, designed to connect people in rural and remote areas, help fill coverage gaps, and bring people back online after disasters.


Google has partnered with the French space agency, the Centre National d'Etudes Spatiales, or CNES, with a goal of reaching higher ground with its Project Loon initiative.

Project Loon is essentially a program by Google to bring free Internet to developing countries through high-altitude balloons that beam down Wi-Fi signals.

The two companies remained relatively quiet about their plans to partner, although CNES did reveal that Google would be taking advantage of the space agency's expertise in balloon technology. Google, on the other hand, will conduct long-running balloon campaigns as a part of CNES' study of the ozone and stratosphere.


Project Loon was first conceived by Google X, the division of the company that is dedicated to "moon shots," or projects that are ahead of their time and may not have immediate impact but have a high potential for future payout.

See More About Loon 

The partnership may have impacts beyond Project Loon, however. Google has been under increased scrutiny in Europe, with the European Parliament promoting a breakup of Google and Google choosing to shut down Google News in Spain after new laws were passed requiring the company to pay fees to the publications from which it takes news snippets.

Google X is known for a number of other interesting projects. For example, that division of Google is behind Google Glass, an augmented reality headset that allows users to perform many functions without having to reach for their phone. It's also behind Project Ara, a modular smartphone that lets users remove and replace individual components so the phone can be upgraded as technology improves.

FAQ about Project Loon



New Technology: How Does the Leap Motion Controller Work?

Hardware
The Leap Motion Controller is actually quite simple. The heart of the device consists of two stereo cameras and three infrared LEDs. These track infrared light with a wavelength of 850 nanometers, which is outside the visible light spectrum.

The device has a large interaction space of eight cubic feet, which takes the shape of an inverted pyramid – the intersection of the binocular cameras’ fields of view. The Leap Motion Controller’s viewing range is limited to roughly 2 feet (60 cm) above the device. This range is limited by LED light propagation through space, since it becomes much harder to infer your hand’s position in 3D beyond a certain distance. LED light intensity is ultimately limited by the maximum current that can be drawn over the USB connection.


At this point, the device’s USB controller reads the sensor data into its own local memory and performs any necessary resolution adjustments. This data is then streamed via USB to the Leap Motion tracking software.

Because the Leap Motion Controller tracks in near-infrared, the images appear in grayscale. Intense sources or reflectors of infrared light can make hands and fingers hard to distinguish and track. This is something that we’ve significantly improved with our v2 tracking beta, and it’s an ongoing process.

Software

Once the image data is streamed to your computer, it’s time for some heavy mathematical lifting. Despite popular misconceptions, the Leap Motion Controller doesn’t generate a depth map – instead it applies advanced algorithms to the raw sensor data.

The Leap Motion Service is the software on your computer that processes the images. After compensating for background objects (such as heads) and ambient environmental lighting, the images are analyzed to reconstruct a 3D representation of what the device sees.

Next, the tracking layer matches the data to extract tracking information such as fingers and tools. Our tracking algorithms interpret the 3D data and infer the positions of occluded objects. Filtering techniques are applied to ensure smooth temporal coherence of the data. The Leap Motion Service then feeds the results – expressed as a series of frames, or snapshots, containing all of the tracking data – into a transport protocol.
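Leap Motion doesn't spell out which filters it uses, but the idea of enforcing temporal coherence can be illustrated with a simple exponential moving average over successive fingertip positions. The Python sketch below is only an illustration of that kind of smoothing, not the actual tracking code:

def smooth_positions(positions, alpha=0.3):
    """Exponentially smooth a sequence of (x, y, z) positions, one per frame.

    Lower alpha means smoother but laggier output; this is an illustration of
    temporal filtering, not Leap Motion's algorithm.
    """
    smoothed, prev = [], None
    for pos in positions:
        prev = pos if prev is None else tuple(alpha * p + (1 - alpha) * q for p, q in zip(pos, prev))
        smoothed.append(prev)
    return smoothed

raw = [(0.0, 150.0, 10.0), (2.5, 151.0, 9.0), (1.0, 149.5, 11.0), (3.0, 150.5, 10.5)]
print(smooth_positions(raw))  # jitter in the raw millimetre values is damped frame to frame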


Through this protocol, the service communicates with the Leap Motion Control Panel, as well as native and web client libraries, through a local socket connection (TCP for native, WebSocket for web). The client library organizes the data into an object-oriented API structure, manages frame history, and provides helper functions and classes.
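That web-facing WebSocket connection can be poked at directly. The sketch below assumes the Leap Motion service is listening on its usual local address (ws://127.0.0.1:6437) and uses the third-party websocket-client package; the exact URL and frame fields can vary between software versions, so treat it as an illustration rather than official SDK usage:

import json
import websocket  # third-party "websocket-client" package: pip install websocket-client

LEAP_WS_URL = "ws://127.0.0.1:6437"  # assumed default; newer versions may use a versioned path

ws = websocket.create_connection(LEAP_WS_URL)
try:
    for _ in range(5):  # read a handful of messages from the tracking stream
        frame = json.loads(ws.recv())
        hands = frame.get("hands", [])
        pointables = frame.get("pointables", [])
        print("frame", frame.get("id"), "-", len(hands), "hand(s),", len(pointables), "pointable(s)")
finally:
    ws.close()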

From there, the application logic ties into the Leap Motion input, allowing a motion-controlled interactive experience. Next week, we’ll take a closer look at our SDK and getting started with our API.




New Technology: Fontus, a Self-Filling Water Bottle That Pulls Water from Thin Air


Austrian designer Kristof Retezár has submitted this self-filling water bottle, dubbed Fontus, for award consideration to the James Dyson Foundation. His proposal cites potential benefits both to athletes and, more broadly, to regions where obtaining potable water can be difficult (in many cases, these are also places where many people travel by bicycle).
While clean water may be tragically scarce for many people here on Earth’s surface, in the atmosphere, thousands of cubic kilometers of life-giving H2O surround us, just there in the air, ripe for the taking. With his Fontus self-filling water bottle, Austrian industrial designer Kristof Retezár is trying to tap that resource.
Here’s how it works. Users attach a half-liter bottle to the device and mount it on their bicycle. As the bike moves, Fontus collects air, cools it, and condenses its moisture using solar power. Fresh water, now separated from the air, drips into the bottle, and with the right humidity, Retezár claims cyclists can produce around 16 ounces (roughly half a liter) of water per hour.

But beyond quenching users’ thirsts, Retezár explains his more humanitarian goals for the project on the device’s entry page for the James Dyson student design awards. He hopes to use the technology behind Fontus to help harvest more water for the over two billion people living in regions desperately in need of it.

 
How does it work? “Basically, condensation occurs when you cool air to its saturation point. Fontus has a small internal cooler that is divided into two halves. A solar panel provides energy to cool the upper half of the condenser, a process that heats the lower half. When air flows past the heated lower half, it makes the top cool even further. Air moving through the chambers is slowed and cooled to condense moisture, which drips down into the bottle.”
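To get a feel for what “the right humidity” means for that 16-ounces-per-hour figure, you can estimate how much water vapor a given volume of air carries and how much could plausibly be condensed. The Python sketch below uses the Magnus approximation for saturation vapor pressure; the airflow and capture-efficiency numbers are assumptions for illustration, not Fontus specifications:

import math

def saturation_vapor_pressure(temp_c):
    """Saturation vapor pressure of water in Pa (Magnus approximation)."""
    return 611.2 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def absolute_humidity(temp_c, rel_humidity):
    """Water vapor content of air in kg per cubic metre."""
    e = rel_humidity * saturation_vapor_pressure(temp_c)  # actual vapor pressure, Pa
    R_v = 461.5  # specific gas constant of water vapor, J/(kg*K)
    return e / (R_v * (temp_c + 273.15))

# Warm, humid air at 30 degrees C and 60% relative humidity (assumed example conditions)
per_m3 = absolute_humidity(30.0, 0.60)   # roughly 0.018 kg of water per cubic metre
airflow_m3_per_hour = 50.0               # assumed air pushed through the condenser per hour
capture_efficiency = 0.5                 # assumed fraction of the vapor actually condensed
grams_per_hour = per_m3 * airflow_m3_per_hour * capture_efficiency * 1000
print(round(per_m3 * 1000, 1), "g/m^3 in the air,", round(grams_per_hour), "g/h captured")
# ~450 g/h under these assumptions, the same order as the claimed 16 oz (about 0.5 L)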

The inspiration: “According to UN statistics, more than 2 billion people in more than 40 countries live in regions with water scarcity. In 2030, 47% of the world’s population will be living in areas of high water stress. Water scarcity may be the most underestimated resource issue facing the world today. Every measure to ease this upcoming crisis is a welcome one.”

For now, it is a work in progress – whether this design hits mass production without kinks or complications remains to be seen, particularly given the difficulty of distilling liquid water from air moisture. That said, the process does have a long history in various forms. “Harvesting water from the air is a method that has been practised for more than 2000 years in certain cultures, mostly in Asia and Central America. The Earth’s atmosphere contains around 13,000 km³ of mostly unexploited freshwater. This project is an attempt to discover these resources. My goal was to create a small, compact and self-sufficient device able to absorb humid air, separate water molecules from air molecules and store water in liquid form in a bottle.”



New Technology: Charge Your Cell Phone Easily Using Your Belt


Belts are so darn boring. However, without them, our ill-fitting pants would be down by our ankles most of the time, not a good look if you’re walking into a job interview or delivering an important speech on global warming. Get spotted in the wrong place at the wrong time and you could even end up spending a night in the cells.

Thankfully, Nifty – a UK-based startup that made a name for itself with its MiniDrive storage solution for the MacBook – is threatening to breathe new life into the humble waist-based loop. The team has come up with an innovative design that incorporates battery-charging tech, offering the pants-wearing public a new way to keep their mobile device at full power while they’re dashing about in their comfortably fitting trousers.

Related Post: Your DNA Will Be Stored in the Cloud



The XOO Belt (pronounced ‘zoo’) is wearable tech that you might actually want to wear – especially if running out of smartphone juice is an issue for you. And because it’s slung around your body, you’ll have one less thing to carry when you go out.

“It looks, feels and weighs about the same as a really nice belt….but comes with a mighty 2,100mAh of hidden charge and can charge pretty much any device,” the Nifty team says.

Designed with a new breed of lithium ceramic polymer flexible battery, the belt is said to be safe, durable, and weather-resistant, and weighs “about the same” as a regular belt.

While the flexible part of the battery lives inside the belt strap, the rest is contained in the buckle. The charging wire runs along the inside of the belt when it’s not in use, with magnets holding it in place.


 
 
You charge it the same way you would your smartphone, and five discreetly placed LEDs on the buckle indicate the power level. According to Nifty, the belt will fully charge an iPhone 6, for example, in about 2.5 hours from empty.
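A quick back-of-the-envelope check makes that claim plausible. The iPhone 6 battery capacity and the conversion-efficiency figure in the Python sketch below are common estimates assumed for illustration, not numbers from Nifty:

belt_capacity_mah = 2100       # stated XOO Belt capacity
phone_battery_mah = 1810       # commonly cited iPhone 6 battery capacity (assumption)
conversion_efficiency = 0.85   # assumed losses in voltage conversion and charging

usable_mah = belt_capacity_mah * conversion_efficiency
print(round(usable_mah / phone_battery_mah, 2), "full charges per belt charge")  # roughly one full charge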

Nifty’s XOO Belt is part of a recently launched Indiegogo crowdfunding campaign, so it’s not ready just yet. However, should backers stump up a total of $50,000 by December 18, the company plans to start shipping the product in July with a $155 price tag, though early backers can, of course, get a better deal.