Google AIY is Google's do-it-yourself artificial intelligence program. There are currently two kits: a Voice Kit and a Vision Kit. So far I have only bought the AIY Voice Kit, so let's unpack it and see what's inside.

The outer packaging is very simple. The front shows the finished product, and the back shows the parts inside: the main Voice HAT circuit board, a microphone board, a speaker, an arcade-style button, two pieces of cardboard, and some connecting wires.

There is a very thick manual with detailed assembly instructions and some project ideas; it is practically a magazine in itself. In fact, this kit was originally given away with an issue of The MagPi magazine.

The Voice HAT is the core of the kit: the speaker, microphone, button, and so on all connect to it, and it in turn connects to the Raspberry Pi. I bought a Raspberry Pi just for this.

Now, let's assemble it according to the instructions. First, plug the Voice HAT onto the Raspberry Pi.

Then connect the speaker. The green terminal block on the Voice HAT takes the speaker wires: insert the positive and negative leads into the screw terminals and tighten the screws. A second speaker could actually be connected next to it on the right, but that position has no terminal soldered on, which is a bit of a pity.

Next, connect the microphone. This is easier: just plug it in and it's done.

The most complicated part, the cardboard folding, comes next. First is the inner cardboard frame.

Then fold the outer cardboard box and tuck the inner frame inside it.

OK, one last thing: install the arcade-style button. This button is really nice; there is actually an LED inside it.

Then fix the microphone board in place, aligned with the holes in the cardboard. The manual says to use double-sided tape; I didn't have any, so I fixed it with ordinary tape instead.

OK, close up the box and the assembly is complete.

The next step is to insert the SD card, power it on, and... there it is, a smart speaker. Compared with the Google Home Mini, it is a bit big.

That 5G will change the world should not shock anyone, and it is not just marketing hype. Why? Because the capabilities of 5G can transform existing technologies in ways that are hard to imagine.

Research suggests that by 2035, 5G could enable 12.3 trillion U.S. dollars in global economic output and support 22 million jobs worldwide, so the potential is huge. The technology will not only support devices, it could also change lives. Beyond mobile devices, artificial intelligence, the Internet of Things, and robotics will all be affected by 5G. In this article, we will explore the potential of 5G in these areas.

Self-driving cars

As the Internet of Things ties our physical world to digital platforms, 5G is critical to keeping it all running. From detecting obstacles, reading smart road signs, and following maps to communicating with cars from other manufacturers, these vehicles carry enormous responsibility.

All of this can only happen when large amounts of data are transmitted and processed in real time. That requires a network with matching speed and capacity, and 5G appears to provide exactly that. Its high capacity, low latency, and security are what it will take to put self-driving cars on the road.

Smart City

The cities of the future will be different from the cities we live in today. They will be full of connected devices, interactive autonomous vehicles, on-demand smart buses, driverless taxis, and more. Smart cities will also include smart buildings, which will let companies increase efficiency by regulating energy consumption.

Data from these cities will help us understand how resources are used in a given area and how that use can be optimized. The possibilities are endless, but we will need the next-generation network, 5G, to make them a reality.

IoT technology

The Internet of Things has already begun to change the world, but integration with 5G will transform it completely, connecting billions more devices to the Internet. Although the home IoT has great potential, the bigger opportunity lies in the Industrial Internet of Things.

From manufacturing and agriculture to retail, healthcare, and beyond, the Internet of Things will be everywhere, and 5G will greatly expand its reach. In healthcare, for example, 5G will enable robotic surgery, personalized medicine, wearable health monitoring, and more.

Robotics

We all know the potential robotics brings to industry, but many people may not realize what becomes possible when robots are paired with 5G. To operate efficiently, robots need to exchange large amounts of data with back-end systems and with workers, and that requires the capacity and capabilities of 5G networks.

For example, in agriculture, robots can monitor the condition of crop fields and send near real-time video and data back to farmers. On receiving instructions, a robot can carry out the required operations, such as pruning, spraying, or harvesting crops. Robots can also take field measurements and transmit them to remote scientists.

Why does this matter? The world's population is growing, and so are its needs. To maintain the food supply, new technologies need to be brought into the field.

AI entertainment

One obvious use of 5G networks is to meet the growing demand for mobile video. The network's data capacity, speed, and low latency will enable innovative forms of entertainment, including virtual reality and augmented reality. We are likely to see a lot of innovation in AR and VR, and not only in entertainment: businesses will see benefits too.

AI, the Internet of Things, and 5G: why?

There is still plenty of confusion around AI and the IoT, but one thing is clear: all of this comes down to data, and to processing large amounts of it in real time.

However, we do not yet have networks that can fully support this. 5G promises:

· Low power consumption

· Support for IoT sensors that last a long time

· Support for far more devices than 4G

· Incredibly high-speed data connections

· Low-latency data delivery, so that more data can be processed in real time

From predictive maintenance and cost reduction to solving problems and making necessary changes, 5G will revolutionize industry.

Network optimization and distribution

For example, 5G enables network slicing, in which a portion of the network's capacity is set aside to meet a specific need. This means the network can be sliced up and allocated among users according to the priority of each task and dedicated to that task.

5G's low latency

Remember, 5G is about much more than speed. Low latency lets 5G networks deliver near real-time video for sports or security purposes. In industries such as construction and healthcare, where constant real-time coordination is essential, this feature may prove extremely beneficial.

In construction, low latency enables effective video conferencing between team members to get the work done.

In healthcare, providers can monitor patients just as effectively even when the patients are outside the hospital.

Edge artificial intelligence: low latency, high efficiency, and low power consumption

The Edge TPU complements Cloud TPU and Google Cloud services, providing an end-to-end, cloud-to-edge, hardware-plus-software architecture that makes it easier for customers to deploy AI-based solutions.

The Edge TPU can be used in a growing number of industrial scenarios, such as predictive maintenance, anomaly detection, machine vision, robotics, and speech recognition. It can be applied in manufacturing, on-premises deployments, healthcare, retail, smart spaces, transportation, and more.

LG's internal IT services unit has tested the Edge TPU and plans to use it in inspection equipment on its production lines. Shingyoon Hyun, chief technology officer of the LG CNS organization, said that LG's current inspection equipment processes more than 200 display panel images per second and that all defects are checked manually. The existing system's accuracy is about 50%, and Google's AI can raise it to 99.9%.

Model Play is an AI model resource platform for developers worldwide. It has a variety of built-in AI models, is compatible with Tiorb AIX, and supports Google's Edge TPU edge AI chip, accelerating professional development.

In addition, Model Play provides a complete, easy-to-use transfer learning training tool and a wealth of example models that pair with Tiorb AIX for rapid development of all kinds of AI applications. Its transfer learning feature is built on Google's open-source neural network architectures and algorithms: users do not need to write any code and can complete AI model training simply by selecting images and defining the model and category names, making AI easy to learn and easy to develop.

The Google Coral USB Accelerator is a USB device that provides an Edge TPU as a co-processor for your computer. When connected to a Linux, Mac, or Windows host, it speeds up inference for machine learning models.

All you need to do is install the Edge TPU runtime and the TensorFlow Lite library on the computer the USB Accelerator is connected to. Then you can use a sample application to perform image classification.

System Requirements:

A computer with one of the following operating systems:

· Linux Debian 6.0 or higher, or any derivative thereof (such as Ubuntu 10.0+), with an x86-64 or ARM64 system architecture (Raspberry Pi is supported, but only the Raspberry Pi 3 Model B+ and Raspberry Pi 4 have been tested)

· macOS 10.15, with MacPorts or Homebrew installed

· Windows 10

· An available USB port (for best performance, use a USB 3.0 port)

· Python 3.5, 3.6, or 3.7

Operating Procedures

1. Install Edge TPU runtime

The Edge TPU runtime is required to communicate with the Edge TPU. You can install it on your Linux, Mac, or Windows host by following the instructions below.

1) Linux system

① Add Google's official Debian package repository to your system;

② Install the Edge TPU runtime:
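
Concretely, these two steps correspond to commands like the following, taken from the Coral getting-started guide (repository and package names may change between releases, so treat this as a sketch):

# Add the Coral package repository and its signing key
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
# Install the standard (default-frequency) Edge TPU runtime
sudo apt-get install libedgetpu1-std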

Now connect the USB Accelerator to the computer with the included USB 3.0 cable. If it was already plugged in, unplug it and plug it back in so that the newly installed udev rules take effect.

※ Installing the maximum operating frequency runtime (optional)

The command above installs the standard Edge TPU runtime for Linux, which runs the device at the default clock frequency. Alternatively, you can install a runtime version that runs the device at the maximum frequency (twice the default). This increases inference speed but also increases power consumption, and the USB Accelerator will become very hot.

If you are not sure whether your application needs the extra performance, use the default operating frequency. Otherwise, you can install the maximum-frequency runtime as follows:

sudo apt-get install libedgetpu1-max

You cannot have both versions of the runtime installed at the same time, but you can switch between them simply by installing the other package, as shown above.

Note: When the device is operating at the maximum frequency, the metal on the USB Accelerator can become very hot and may cause burns. To avoid injury, keep the device out of reach while it is running at the maximum frequency, or use the default frequency.

2) Mac system

① Download and unzip the Edge TPU runtime

② Install Edge TPU runtime
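
A minimal sketch of these two steps, assuming the runtime archive downloaded from the Coral site is named edgetpu_runtime.zip (the real file name varies by release) and contains the install.sh installer script:

cd ~/Downloads
unzip edgetpu_runtime.zip    # archive name is a placeholder
cd edgetpu_runtime
sudo bash install.sh         # prompts whether to enable the maximum operating frequency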

The installation script will ask whether you want to enable the maximum operating frequency. Running at the maximum operating frequency increases inference speed but also increases power consumption and makes the USB Accelerator very hot. If you are not sure whether your application needs the extra performance, type "N" to use the default operating frequency.

You can read more about performance settings in the official USB Accelerator data sheet.

Now, use the included USB 3.0 cable to connect the USB Accelerator to the computer. Then continue to install the TensorFlow Lite library.

3) Windows system:

① Download the latest official archive, unzip the ZIP file, and then double-click the install.bat file.

A console window will open to run the installation script, which will ask whether you want to enable the maximum operating frequency. Running at the maximum operating frequency increases inference speed but also increases power consumption and makes the USB Accelerator very hot. If you are not sure whether your application needs the extra performance, type "N" to use the default operating frequency.

You can read more about performance settings in the Coral USB Accelerator data sheet provided by Google.

Now, use the included USB 3.0 cable to connect the USB Accelerator to the computer.

2. Install the TensorFlow Lite library

There are multiple ways to install the TensorFlow Lite API, but to get started with Python the easiest option is to install the tflite_runtime library. It provides only the code needed to run inference from Python (mainly the Interpreter API), which saves a lot of disk space compared with the full TensorFlow package.

To install it, follow the TensorFlow Lite Python quick start and then return to this page after running the pip3 install command.
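
As a sketch: on recent systems the library can be installed from PyPI, where the package is named tflite-runtime (imported as tflite_runtime). Older quick-start guides instead pointed to a platform-specific wheel URL, and prebuilt wheels are not available for every platform and Python version.

python3 -m pip install tflite-runtime
python3 -c "import tflite_runtime.interpreter"    # quick check that the import works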

3. Use the TensorFlow Lite API to run the model

You can now run inference on the Edge TPU. The following steps perform image classification using sample code and models.

1) Download the example code from GitHub

2) Download the bird classifier model, label file, and a bird photo

3) Run the image classifier on the bird photo

Inference speed may vary depending on the host system and whether a USB 3.0 connection is used.
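
To make step 3 concrete, here is a minimal Python sketch of image classification on the Edge TPU using the tflite_runtime Interpreter API. It is not the official sample script; the model, label, and photo file names below are the ones used in the Coral bird-classifier example and act as placeholders here. On macOS the delegate library is libedgetpu.1.dylib and on Windows edgetpu.dll instead of libedgetpu.so.1.

import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load the Edge TPU-compiled model and delegate execution to the accelerator.
interpreter = Interpreter(
    model_path='mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite',
    experimental_delegates=[load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, height, width, _ = input_details[0]['shape']

# Resize the bird photo to the model's input size and add a batch dimension.
image = Image.open('parrot.jpg').convert('RGB').resize((width, height))
input_data = np.expand_dims(np.asarray(image, dtype=np.uint8), axis=0)
interpreter.set_tensor(input_details[0]['index'], input_data)

interpreter.invoke()

# The quantized model outputs one score per class; report the best match.
scores = np.squeeze(interpreter.get_tensor(output_details[0]['index']))
# Each label line may be "index name"; keep only the name part.
labels = [line.strip().split(maxsplit=1)[-1] for line in open('inat_bird_labels.txt')]
best = int(np.argmax(scores))
print(labels[best], scores[best])

If the model file was not compiled for the Edge TPU (its name does not end in _edgetpu.tflite), the same script should still run, but inference falls back to the CPU and will be much slower.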

To run other types of neural networks, check out the official example projects, which include real-time object detection, pose estimation, key phrase detection, on-device transfer learning, and more.

AI hardware and software supporting Google Edge TPU

Model Play and Tiorb AIX, developed by Gravitylink, fully support the Edge TPU. AIX is an AI hardware product that integrates two core functions, computer vision and intelligent voice interaction. Its built-in AI acceleration chip (Coral Edge TPU or Intel Movidius) supports deep learning inference at the edge and provides reliable performance.

Model Play is an AI model resource platform for developers worldwide, with a variety of built-in AI models. Combined with Tiorb AIX and built on Google's open-source neural network architectures and algorithms, it provides a transfer learning feature that requires no code: AI model training is completed simply by selecting images and defining the model and category names.