WHITNEY CROOKS

Tufts University
Years in Grad School: 1
Judges’ Queries and Presenter’s Replies
  • Hainsworth Shin

    Judge
    Faculty
    May 21, 2013 | 12:27 a.m.

    Nice job with the video and poster. I was wondering how these social-networking testing platforms might be used to test soft materials, or to determine specifications for soft materials to be used in robots. Thanks.

  • Whitney Crooks

    Lead Presenter
    May 21, 2013 | 12:23 p.m.

    Thank you. We have not yet thought about exploring different types of soft materials in our robots because we’re using the soft material from our 3D printer. However, I think the question you raise is very interesting and one we should explore. In the future, we would like to have several arenas running simultaneously, and it would be very interesting to see how gait, or crawling, in our Softworms changed for different soft materials.

  • Peter Pfromm

    Judge
    Faculty: Project Co-PI
    May 21, 2013 | 11:37 a.m.

    Thanks for the “creepy” video! I may have overlooked this in your materials: did you develop and build the robot yourself, or who did? You refer to non-linear properties of “soft” robot materials; what might those properties be? Is the non-linearity significant in the deformation regimes you are using?

  • Whitney Crooks

    Lead Presenter
    May 21, 2013 | 12:54 p.m.

    Thank you for watching. The robot was developed by Drs. Vishesh Vikas and Takuya Umedachi, who both work under Dr. Barry Trimmer in the Neuromechanics and Biomimetic Devices Laboratory at Tufts University. The material we are using is viscoelastic, and since the actuation of the SMAs applies a cyclic load on the material, hysteresis will likely appear in the stress-strain curve (since we are just getting the Softworm arena up and running, this is our hypothesis based on other studies of soft materials). In addition, the stiffness of the material will likely change over time as the SMAs are actuated repeatedly. The force the SMAs generate is relatively small, so I would expect the non-linearity to be present but not necessarily significant in the current arena, which tests the gait. However, as we add obstacles that the robot will have to squeeze through, navigate under, etc., I would expect the non-linearity to play a significant role.
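The cyclic-load hysteresis described in this reply can be illustrated with a minimal Kelvin-Voigt viscoelastic model; the moduli, frequency, and strain amplitude below are made-up illustrative values, not measured properties of the printed material:

```python
import math

# Illustrative Kelvin-Voigt viscoelastic model: sigma = E*eps + eta*deps/dt.
# E, eta, omega, and eps0 are made-up values, not measured properties of the
# printed material.
E, eta = 1.0e5, 2.0e3             # elastic modulus (Pa), viscosity (Pa*s)
omega, eps0 = 2 * math.pi, 0.05   # 1 Hz cyclic load, 5% strain amplitude

def stress(t):
    eps = eps0 * math.sin(omega * t)
    deps = eps0 * omega * math.cos(omega * t)
    return E * eps + eta * deps

# Energy dissipated per cycle = area of the stress-strain hysteresis loop,
# i.e. the integral of sigma * deps over one period (rectangle rule).
N = 10_000
dt = 1.0 / N
loop_area = sum(stress(i * dt) * eps0 * omega * math.cos(omega * i * dt) * dt
                for i in range(N))

# For Kelvin-Voigt the analytic loop area is pi * eta * omega * eps0**2;
# a purely elastic material would give zero area (no hysteresis).
analytic_area = math.pi * eta * omega * eps0 ** 2
```

A nonzero loop area is exactly the hysteresis the reply hypothesizes: the viscous term makes stress lead strain, so loading and unloading follow different curves.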

  • Mary Albert

    Judge
    Faculty: Project Co-PI
    May 21, 2013 | 12:55 p.m.

    Good job on the poster and video! In the poster you say that you will be able to use data collected from the user interactions to create a genetic algorithm for the robot. Do you think that the fact that the users can visually see the maze will create a genetic algorithm that is primarily suited to this particular maze?

  • Whitney Crooks

    Lead Presenter
    May 21, 2013 | 02:25 p.m.

    Thank you! Your question raises a great point, and one I hadn’t really considered before. With the current arena, we’re just trying to optimize the gait, so it shouldn’t be a problem that users can see the whole maze. We had two ideas for when we start adding obstacles: 1) switch out the courses every X number of weeks, and 2) run different arenas simultaneously. We like the second idea because players could “unlock” new arenas the more they play. Your question also raises another potential issue with the current setup: if we deploy the robot somewhere it is blind or has limited vision, how do we ensure that the genetic algorithm will work? I think it would be interesting to see whether users who couldn’t see the full maze came up with an algorithm similar to that of users who could see the entire arena.
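As a sketch of the genetic-algorithm idea discussed above, a minimal elitist GA over a hypothetical gait encoding might look like the following; the encoding (SMA actuation timings), the fitness function, and all parameters are placeholders, not the project's actual code:

```python
import random

random.seed(0)  # reproducible sketch

# Hypothetical gait encoding: a fixed-length list of SMA actuation timings
# in seconds. Everything here is illustrative.
GENOME_LEN = 8

def random_genome():
    return [random.uniform(0.1, 1.0) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Stand-in objective: in the real arena this would be measured, e.g.
    # distance traveled per actuation cycle. Here we reward timings near a
    # made-up optimum of 0.5 s so the sketch runs end to end.
    return -sum((t - 0.5) ** 2 for t in genome)

def evolve(pop_size=20, generations=30, mutation_rate=0.2):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, GENOME_LEN)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:    # random-reset mutation
                child[random.randrange(GENOME_LEN)] = random.uniform(0.1, 1.0)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

In the crowdsourced version, the fitness evaluations would come from users driving the physical Softworm rather than from a coded objective, which is also where the maze-specific overfitting the judge asks about could creep in.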

  • Christopher Buneo

    Judge
    Faculty: Project Co-PI
    May 21, 2013 | 02:22 p.m.

    Well done! You mentioned that currently no framework exists for identifying general control solutions for structures such as your Softworm. Besides the genetic algorithm approach, what other alternatives are available for identifying control schemes for soft robots?

  • Whitney Crooks

    Lead Presenter
    May 21, 2013 | 03:09 p.m.

    Thank you. As far as other approaches go, we could simply use the user-generated methods we identify as “best” (i.e., fastest motion in a straight line, or fewest actuations required to make a turn) through data analysis to hard-code a control algorithm. Control strategies could also be generated by modeling the Softworm and running it through virtual arenas, but we felt that method was too intensive.
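Picking the “best” user-generated method from logged trials could be as simple as ranking by an efficiency metric; the log format, field names, and numbers below are hypothetical:

```python
# Ranking logged user trials by a simple efficiency metric, e.g. distance
# covered per SMA actuation. The log format and numbers are hypothetical.
trials = [
    {"user": "a", "actuations": 12, "distance_cm": 30.0},
    {"user": "b", "actuations": 8,  "distance_cm": 28.0},
    {"user": "c", "actuations": 15, "distance_cm": 31.0},
]

def efficiency(trial):
    return trial["distance_cm"] / trial["actuations"]

# The top-ranked sequence could then be hard-coded as the control policy.
best_trial = max(trials, key=efficiency)
```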

  • Karen McDonald

    Judge
    Faculty: Project PI
    May 21, 2013 | 07:23 p.m.

    I really like the concept, and I love the fact that the general public can log in and manipulate the devices (robot or worm). My question: from the interface, it looks like the control is primarily discrete (buttons that turn things on or off rather than continuous controls). How does that influence your ability to discover optimal control strategies, which are likely to require a continuous process in which, for example, speed can be controlled? Are you trying to capture human-type control strategies, and how does the nature of the manipulated-variable interface influence your ability to determine the algorithms?

  • Whitney Crooks

    Lead Presenter
    May 21, 2013 | 08:24 p.m.

    The LEGO robot served primarily to test the website so the controls were kept very simple. With the Softworm, we aren’t sure what input parameters will create the fastest and slowest speeds, which is primarily what we’re interested in determining with the current arena. However, I think you raise a very interesting point. It might be helpful to users for the robot to run continuously until one of the parameters is changed, and also helpful to us in developing algorithms because most robots do run continuously as you mentioned. We have discussed using different controls when we add obstacles, which will better lend themselves to the continuous environment you are talking about. For example, the move forward button would have a speed control and the turn left/right button would have an angle control. We are interested in learning about how humans control the robot and comparing that to algorithms generated by the computer, but I think you’re right that we should carefully consider how the controls we give users influence the algorithms they come up with. I think the best way to eliminate this error would be to test a variety of control interfaces to determine whether or not there is an effect.
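The “run continuously until a parameter changes” behavior mentioned in this reply can be sketched as a latched command model; the command names and fields are illustrative, not the arena's actual interface:

```python
# Sketch of a "latched" command model: the robot keeps executing the last
# command until a user changes a parameter, rather than responding to
# one-shot button presses. Command and field names are illustrative.
state = {"command": "stop", "speed": 0.0, "turn_deg": 0.0}

def handle_input(event):
    # event is a form submission such as {"command": "forward", "speed": 0.6}
    state.update(event)

def motor_output():
    # Called every control tick; returns the same output until state changes.
    if state["command"] == "forward":
        return ("drive", state["speed"])
    if state["command"] in ("left", "right"):
        sign = -1.0 if state["command"] == "left" else 1.0
        return ("turn", sign * state["turn_deg"])
    return ("idle", 0.0)

handle_input({"command": "forward", "speed": 0.6})
out_forward = motor_output()
handle_input({"command": "left", "turn_deg": 30.0})
out_turn = motor_output()
```

Adding continuous parameters (speed, turn angle) to each latched command is what turns the discrete button interface into something closer to the continuous control the judge describes.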

Presentation Discussion
  • Margaret Garcia

    Graduate Student
    May 22, 2013 | 01:35 p.m.

    Good luck fellow Tufts team!

  • Whitney Crooks

    Lead Presenter
    May 22, 2013 | 01:45 p.m.

    Thank you! Good luck to you all too!

  • Brian Drayton

    Faculty: Project Co-PI
    May 23, 2013 | 07:48 a.m.

    I love the strategy of using social media to build a model of the sort of softworm intelligence that you want.
    I am curious what software you will use to implement that intelligence?

  • Whitney Crooks

    Lead Presenter
    May 23, 2013 | 10:15 a.m.

    Hi Brian. Thanks for your question. We’re using WordPress PHP templates for the website. On my side, I’m doing everything in LabVIEW. My LabVIEW code listens to the website to detect HTML form submissions and then alters the HTML code accordingly. In addition, I’m using BotSpeak, software for LabVIEW that the Center for Engineering Education and Outreach at Tufts is developing, to communicate with the Arduino. When we collect enough data to implement the genetic algorithm, we will do that in LabVIEW as well. Thanks for your interest, and let me know if you have any more questions!
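The actual pipeline is LabVIEW plus BotSpeak; as a language-agnostic illustration only, the listen-for-submissions loop might look like this in Python, with all function names, fields, and the in-memory queue being hypothetical stand-ins:

```python
# The real pipeline is LabVIEW + BotSpeak; this Python sketch just shows
# the shape of the idea: poll for new form submissions and forward each
# command to the robot. All names and the in-memory queue are hypothetical.
def fetch_new_submissions(seen_ids, submissions):
    # Stand-in for an HTTP request to the WordPress site.
    return [s for s in submissions if s["id"] not in seen_ids]

def poll_once(seen_ids, submissions, send_to_robot):
    for sub in fetch_new_submissions(seen_ids, submissions):
        send_to_robot(sub["command"])  # BotSpeak would talk to the Arduino here
        seen_ids.add(sub["id"])

queue = [{"id": 1, "command": "forward"}, {"id": 2, "command": "left"}]
sent, seen = [], set()
poll_once(seen, queue, sent.append)
poll_once(seen, queue, sent.append)    # second poll: nothing new to send
```

Tracking already-seen submission IDs is what keeps each button press from being sent to the robot more than once.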
