As a group we decided to work less during this sprint because of the exam period, which spanned almost two weeks, so we have not been able to do as much this sprint as we would have liked.

Block programming

Block server

In this sprint we have implemented a way to retrieve code from the Blockly server via HTTP calls, using a library called Flask. We do not yet have any security checks on the code that arrives through these HTTP calls; whether we need them will be discussed. The server receives the code and adds it to a queue, and we can then run each program by extracting it from the queue. As of right now there is no time limit for a program, meaning that if the input is a while loop that never terminates, the program will run forever. We plan to add a time limit, but we do not yet know how long it should be, or whether it should be adjusted dynamically based on how large the input program is.
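To make this concrete, here is a minimal sketch of the idea, assuming Flask and a subprocess-based time limit; the endpoint path, port and 10-second timeout are illustrative assumptions, not our actual configuration.

    # Minimal sketch of the block server idea described above. The endpoint name,
    # port and timeout value are assumptions for illustration.
    import queue
    import subprocess
    from flask import Flask, request

    app = Flask(__name__)
    program_queue = queue.Queue()

    @app.route("/program", methods=["POST"])
    def receive_program():
        # The Blockly site POSTs generated Python code as plain text.
        program_queue.put(request.get_data(as_text=True))
        return "queued", 200

    def run_next_program(timeout_seconds=10):
        # Pull one program from the queue and run it in a subprocess so that a
        # non-terminating while loop can be stopped after the timeout.
        # In a real server this would be called from a separate worker loop.
        code = program_queue.get()
        try:
            subprocess.run(["python", "-c", code], timeout=timeout_seconds)
        except subprocess.TimeoutExpired:
            print("Program exceeded the time limit and was stopped.")

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)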

We have also implemented arm gestures along the three major axes. Together with the Blockly group we have been able to test sending a block program from the Blockly site to the block server and then executing that code.
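As an illustration, an arm gesture along those axes could be driven through NAOqi's ALMotion service roughly as below; the connection address, joint selection and angle values are assumptions for the sketch, not the gestures we have implemented.

    # Hedged sketch of driving an arm gesture through NAOqi's ALMotion service.
    import qi

    session = qi.Session()
    session.connect("tcp://127.0.0.1:9559")  # Pepper's address and NAOqi port (assumed)
    motion = session.service("ALMotion")

    # Pitch, roll and yaw of the shoulder/elbow roughly correspond to the
    # three major axes the arm gestures move along.
    joints = ["RShoulderPitch", "RShoulderRoll", "RElbowYaw"]
    angles = [0.5, -0.3, 1.0]              # target angles in radians (illustrative)
    motion.setAngles(joints, angles, 0.2)  # 0.2 = fraction of maximum speed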

Blockly

As requested by Luleå Makerspace, the website has now been translated to Swedish, and in the upcoming weeks users will be able to choose between English and Swedish as the language of the site via a button or menu. The implementation will be kept modular, making it as easy as possible to add support for new languages in the future. The code generators for the current custom blocks are finished as well; new blocks will be implemented over time, with code generators for them after that.

While waiting for the Block server to be fully operational, a minimalistic Python server was set up to test that the generated code actually outputs what is intended and that the site can send it to the server. This is now working, and the Blockly site together with the Block server will be ready to work together for the demo on November 19th.
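Such a test server can be kept very small; the sketch below uses only Python's standard library and assumes the generated code is POSTed as plain text to port 8000, which are assumptions about the setup rather than its exact configuration.

    # Minimal test server sketch: print whatever code the Blockly site sends.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class CodeEchoHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length).decode("utf-8")
            print("Received generated code:\n" + body)
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8000), CodeEchoHandler).serve_forever()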

Web interaction

Just like some of the other groups, we have not spent that much time on Pepper due to the exam period. The main thing we have done is to get the interaction with Wikipedia to work as intended. To search Wikipedia we have developed a listen function that listens to your question, after which Pepper, with the help of a Wikipedia API, reads about the topic from Wikipedia. For now it only works in English; we have tried to get it to work in Swedish, but without success so far.
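A rough sketch of the Wikipedia step, assuming the third-party wikipedia Python package and NAOqi's ALTextToSpeech; the listen function itself is omitted, and the recognized question is assumed to already be a plain string.

    # Hedged sketch: look up a recognized question on Wikipedia and have Pepper
    # read the summary aloud. The wikipedia package and the connection details
    # are assumptions about the setup, not a confirmed part of our implementation.
    import qi
    import wikipedia

    def read_about(topic, session):
        wikipedia.set_lang("en")  # Swedish ("sv") has not worked for us yet
        summary = wikipedia.summary(topic, sentences=2)
        tts = session.service("ALTextToSpeech")
        tts.say(summary)

    if __name__ == "__main__":
        session = qi.Session()
        session.connect("tcp://127.0.0.1:9559")  # Pepper's address (assumed)
        read_about("Lulea", session)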

For the next sprint we plan to use our listen function to enable other interactions with the web, such as showing pictures from Google or videos from YouTube on Pepper's display. This will work in a similar way, using APIs to search the web. On top of this we will look further into how to make these web interactions child-friendly and safe.
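As a first idea for the display part, an image found on the web could be shown on Pepper's tablet through NAOqi's ALTabletService; the URL below is only a placeholder and the search step itself is not shown.

    # Hedged sketch: display a web image on Pepper's tablet via ALTabletService.
    import qi

    session = qi.Session()
    session.connect("tcp://127.0.0.1:9559")  # Pepper's address (assumed)
    tablet = session.service("ALTabletService")

    # showImage loads the image from a URL that must be reachable from the robot.
    tablet.showImage("https://example.com/picture.jpg")  # placeholder URL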

Rock-paper-scissors

During this sprint we have looked into hand tracking and alternative ways of taking acceptable pictures to send to the image recognition network. We found hand tracking quite difficult, as NAOqi does not include any hand tracking functionality; the only similar functionality we found was color blob tracking. However, when testing the color blob tracking, Pepper only managed to track faces instead, which might be an issue with the parameters, or a sign that hands are not suitable for that type of tracking.
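For reference, the color blob experiment was along the lines of the sketch below; we assume NAOqi's ALColorBlobDetection service here, and the color, threshold and object properties are illustrative guesses rather than the exact values we tried.

    # Hedged sketch of configuring color blob detection for a hand-like blob.
    # The RGB values, threshold and object properties are assumptions.
    import qi

    session = qi.Session()
    session.connect("tcp://127.0.0.1:9559")  # Pepper's address (assumed)
    blob = session.service("ALColorBlobDetection")

    blob.setColor(205, 160, 140, 30)             # approximate skin color + threshold
    blob.setObjectProperties(20, 0.1, "Circle")  # min size (px), span (m), expected shape
    blob.subscribe("HandBlobTest")               # start the extractor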

Next sprint we plan to continue looking into tracking and other solutions for getting acceptable pictures, and also look into how we would send these pictures to the image recognition network.
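One possible way to capture and forward a picture, sketched under the assumption that the script runs on the robot and that the image recognition network is reachable over HTTP at a placeholder URL; nothing here is decided yet.

    # Hedged sketch: take a picture with ALPhotoCapture and upload it.
    import qi
    import requests

    session = qi.Session()
    session.connect("tcp://127.0.0.1:9559")  # Pepper's address (assumed)
    camera = session.service("ALPhotoCapture")

    camera.setResolution(2)      # 2 = 640x480 in NAOqi's resolution codes
    camera.setPictureFormat("jpg")
    camera.takePicture("/home/nao/recordings/cameras/", "rps_hand")

    # Assuming this script runs on the robot, the saved file can be uploaded to
    # a (hypothetical) image recognition endpoint.
    with open("/home/nao/recordings/cameras/rps_hand.jpg", "rb") as image_file:
        requests.post("http://example.com/classify", files={"image": image_file})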

Image recognition

In this sprint we began to implement a log of all our neural network tests with different hyperparameters. The log consists of .csv files in which we store the different hyperparameters and their results, i.e. the accuracy of the network. A struggle for our group has been that we have not been able to utilize the GPU cluster, since it has been down for maintenance, which has hindered testing of the neural networks and their hyperparameters.
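A minimal sketch of what such a log entry could look like; the column names and file name are assumptions about the format, not the exact layout of our .csv files.

    # Sketch of appending one test run to a hyperparameter log in .csv form.
    import csv
    import os

    LOG_FILE = "hyperparameter_log.csv"           # assumed file name
    FIELDS = ["learning_rate", "batch_size", "epochs", "accuracy"]

    def log_run(hyperparameters, accuracy):
        # Append one test run to the log, writing the header only once.
        new_file = not os.path.exists(LOG_FILE)
        with open(LOG_FILE, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if new_file:
                writer.writeheader()
            writer.writerow({**hyperparameters, "accuracy": accuracy})

    # Example usage with made-up values:
    log_run({"learning_rate": 0.001, "batch_size": 32, "epochs": 10}, accuracy=0.87)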