
JOSH DE LEEUW

Indiana University
Years in Grad School: 3

Judges’ Queries and Presenter’s Replies

  • May 20, 2013 | 05:19 p.m.

    What do you think would happen in situations where collisions were harmful?

  • Josh de Leeuw

    Presenter
    May 21, 2013 | 11:30 a.m.

    This question presents a lot of opportunities for further investigation with this kind of model. In our system as it currently stands, we are modeling a situation in which there is no explicit harm caused by the collisions, but we could use the same system to investigate this question directly.

    From an evolutionary point of view, there could be interesting trade-offs between the complexity of social sensing and organization, the potential harm caused by collisions, and the benefits of organized behavior. We could model evolutionary hypotheses with this system using the approach outlined by Long (2007). This is complete speculation, but one could imagine an evolutionary trajectory where passive dynamics and the benefits of organized behavior put pressure on the behavior of individuals to exploit the passive dynamics, creating the kind of system we have modeled with our robots. Then the harm of passively organizing imposes pressure to evolve social mechanisms to accomplish the same organization without collisions. In this kind of system, the harmfulness of collisions could be treated as a variable. A likely hypothesis is that if the harmfulness of the collisions outweighs the benefits of the organized behavior, the system will either find another way to generate organized behavior or will avoid it.

    Reference:
    Long, J.H. Jr. (2007). Biomimetic robotics: building autonomous, physical models to test biological hypotheses. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 221, 1193-1200.

  • May 21, 2013 | 01:07 p.m.

    Would you expect to find different patterns when the goal is common (as in your case) versus joint (that is, when individual rewards depend on group coordination)?

  • Josh de Leeuw

    Presenter
    May 21, 2013 | 03:19 p.m.

    The common goal functions as an attractor, bringing all of the agents together so that the passive dynamics of self-organization will cause them to align. Whether or not a joint goal would achieve similar outcomes probably would depend on what exactly the joint goal was and if it fulfilled the role of an attractor.

    In social models of this phenomenon (e.g. Reynolds 1987), the goals of individual agents are also like the “common goal” scenario we’ve modeled here. Usually each agent is attracted to other agents in the group, which depends on some ability to detect neighboring members of the group, but doesn’t require any explicit representation of a joint goal among members of the group. Put another way – the group can coordinate if every agent in the group has the goal “move towards other agents in the group” (a common goal), and it is not necessary for the collective group to have some notion of “make the group as condensed as possible” (a joint goal). Nevertheless, in an evolutionary sense, the individual reward does depend on the group coordination, much like a joint goal! Coordinated behavior has many benefits (and costs) for individuals in the group (Sumpter, 2010) but these depend on the ability of the group to successfully coordinate.

    References:

    Reynolds, C. W. (1987). Flocks, Herds, and Schools: A Distributed Behavioral Model. Computer Graphics, 21(4) (SIGGRAPH ’87 Conference Proceedings), 25-34.

    Sumpter, D.J.T. (2010). Collective Animal Behavior. Princeton University Press.

  • May 21, 2013 | 04:25 p.m.

    After learning about your experiment, I have some questions: (1) Do you think the fact that the robots bump into each other affects the outcome? (2) Would you expect the same results in 3 dimensions (versus the 2-D experiment)? (3) How do you think fish do it?

  • Josh de Leeuw

    Presenter
    May 21, 2013 | 05:12 p.m.

    Thanks for the questions!

    (1) Yes, absolutely. The physical collisions between the robots are what cause them to align. There are nice formal models of this kind of interaction that apply to the system we created (see section 4.2.1 of Vicsek & Zafeiris, 2012). When we manipulate how goal-directed the robots are, we are affecting the probability of collisions. If the robots all share a goal, they congregate in the same area of the tank, producing lots of collisions that cause the robots to align.

    (2) We would expect the same outcome in three dimensions. Simulations of social interactions work well in three dimensions (e.g. Reynolds 1987), and despite the fact that our model achieves coordination without social interaction, the underlying mechanisms in both cases are quite similar. Social models often involve three elements: attraction to nearby group members, avoidance of extremely close group members, and alignment with other group members. Our model can be interpreted using the same ideas: attraction is produced by the common goal, and avoidance and alignment are generated by physical interactions.

    (3) There is empirical evidence that fish use social information to school (e.g. Partridge & Pitcher, 1980), and we certainly don’t want to claim that our model suggests that animals coordinate in an entirely asocial manner. Rather, we think that our results show that asocial mechanisms can produce similar results to social ones, and that it is therefore important to consider the role that asocial mechanisms play in generating coordinated behavior. There are certainly advantages to social mechanisms (no potentially harmful collisions, longer-range modification of behavior) that real animals would evolve to take advantage of.

    References:

    Partridge, B. L., & Pitcher, T. J. (1980). The sensory basis of fish schools: relative roles of lateral line and vision. Journal of Comparative Physiology, 135(4), 315-325.

    Reynolds, C. W. (1987). Flocks, Herds, and Schools: A Distributed Behavioral Model. Computer Graphics, 21(4) (SIGGRAPH ’87 Conference Proceedings), 25-34.

    Vicsek, T., & Zafeiris, A. (2012). Collective motion. Physics Reports.
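    A minimal sketch of those three ingredients (attraction, avoidance, alignment), with made-up parameter values and simple unit-speed agents; this is an illustrative toy, not the controller running on our robots:

```python
import numpy as np

def boids_step(pos, vel, r_attract=5.0, r_avoid=1.0, dt=0.1):
    """One update of the three classic Reynolds-style rules:
    attraction toward nearby agents, avoidance of very close ones,
    and alignment with neighbors' velocities. Parameter values are
    arbitrary choices for illustration."""
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        d = pos - pos[i]                        # offsets to every agent
        dist = np.linalg.norm(d, axis=1)
        near = (dist < r_attract) & (dist > 0)  # neighbors, excluding self
        if not near.any():
            continue
        attract = d[near].mean(axis=0)          # steer toward local centroid
        align = vel[near].mean(axis=0)          # match neighbors' headings
        too_close = (dist < r_avoid) & (dist > 0)
        avoid = -d[too_close].sum(axis=0) if too_close.any() else 0.0
        new_vel[i] = vel[i] + dt * (attract + align + avoid)
    # keep constant speed: renormalize each velocity to unit length
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel = new_vel / np.maximum(speed, 1e-9)
    return pos + dt * new_vel, new_vel
```

    The same loop runs unchanged in three dimensions: only the width of the position and velocity arrays changes, which is part of why we expect the result to carry over.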

  • Mary Gauvain

    Judge
    May 21, 2013 | 06:37 p.m.

    Goal formation is an important part of the type of behavior you study. What do you hypothesize about this aspect of the behavior, that is, how are goals of natural organisms determined and does the type of goal matter in the behavior (social or nonsocial) that occurs?

  • Josh de Leeuw

    Presenter
    May 21, 2013 | 07:34 p.m.

    Our model suggests that when similar organisms share the same asocial goal, coordinated behavior can emerge with the right physical dynamics. There are several models that show similar results with social mechanisms/goals (e.g. Reynolds 1987). Therefore, it seems like whether the goal is asocial or social doesn’t particularly matter in terms of what kind of behavior can be created.

    One potential difference between asocial and social goals is that asocial goals are, by definition, not directly related to the coordination of the group. Our system models a behavior like foraging at a food patch, which is important for the fitness of the agent independently of the coordination that it creates. However, some social goals may be directly relevant to coordination. If an agent has a goal to be near conspecifics and travelling in the same direction as its neighbors, then coordination will result (Reynolds 1987). This has potentially important implications for understanding the evolution of these coordinated behaviors. Since coordinated behavior has benefits and costs for individuals in the group (Sumpter, 2010), goals that produce coordinated behavior could be under selection pressure depending on the relative costs and benefits of organized behavior and the individual goals that generate it. For asocial goals that generate coordination, the costs and benefits of the coordination need to be balanced with the outcome that the goal is actually trying to achieve (such as feeding at a food patch). However, social goals that are entirely related to the coordination of the group only contribute to the agent’s fitness to the extent that the coordinated behavior improves fitness.

    References:

    Reynolds, C. W. (1987). Flocks, Herds, and Schools: A Distributed Behavioral Model. Computer Graphics, 21(4) (SIGGRAPH ’87 Conference Proceedings), 25-34.

    Sumpter, D.J.T. (2010). Collective Animal Behavior. Princeton University Press.

  • May 21, 2013 | 11:08 p.m.

    Interesting. I remember that Reynolds paper! So we know, however, that birds and fish can sense each other and do react to each other, and that those abilities are central in flocking. The circular milling example is perhaps one of a small number of possible collective behaviors that can occur with no social information. What are other such scenarios, and what differentiates these scenarios from the sorts that do require sensing?

  • Josh de Leeuw

    Presenter
    May 21, 2013 | 11:48 p.m.

    Great question. This is the kind of question that is easier to answer with a computational model than a physical embodied system, since exploring a parameter space is much faster in simulation. Luckily, someone else has done this! Grossman, Aranson, and Ben-Jacob (2008) built a computational model that closely mirrors the physical system we constructed. They showed that it can exhibit several kinds of collective motion patterns, including circular milling, but also swarm migration, where an entire group moves together in a particular direction, and complex patterns that involve multiple semi-stable structures.

    A crucial factor in the model is density. When the group density is high enough, the agents self-organize. Our robots achieve a dense clustering by sharing a common goal, which causes them all to try to occupy the same area of the tank. The model of Grossman et al. achieves the same outcome by simply having a dense group from the start, constrained by the boundaries of the simulated world. In their model, boundary shape changes the dynamics of the group, which suggests that we could achieve different patterns of organized behavior in our physical robots by programming in different asocial behaviors. For example, having all robots track a moving target would likely produce a swarm migration pattern.

    Social sensing could be thought of as a way to mitigate the density requirement. With social sensing, reaction to a neighbor can happen at a distance. This makes it possible to coordinate at much lower group densities, which is obviously very important for biological agents that do not want to collide with other members of their group.

    Reference:

    Grossman, D., Aranson, I.S., and Ben-Jacob, E. (2008). Emergence of agent swarm migration and vortex formation through inelastic collisions. New Journal of Physics, 10.
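    The density point can be made with a back-of-envelope estimate. In two dimensions, an agent of radius r moving through number density rho has a mean free path of roughly 1/(2 * r * rho): the denser the group, the shorter the distance between collisions, and the faster passive alignment can act. (This is a standard kinetic-theory estimate for illustration, not a quantity taken from the Grossman et al. model.)

```python
def mean_free_path(n_agents, area, radius):
    """Rough 2D mean free path: the average distance an agent travels
    before hitting another. Uses the kinetic-theory estimate
    1 / (2 * radius * density); an order-of-magnitude tool only."""
    density = n_agents / area
    return 1.0 / (2.0 * radius * density)

# Ten agents of radius 0.5 in a 10 x 10 tank collide only every ~10
# units of travel; pack the same agents into a 2 x 2 tank and the free
# path shrinks 25-fold, the regime where collision-driven alignment
# can take hold.
```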

  • May 22, 2013 | 10:03 p.m.

    Thanks Josh. Okay. So this seems highly intuitive to me. Why do we need simulations and autonomous robots to tell us that having a large number of individuals attempt the same thing in close proximity will result in unique group behavior (such as following the moving target)? How is this useful?

  • Josh de Leeuw

    Presenter
    May 22, 2013 | 10:18 p.m.

    It is obvious that programming the robots to swim towards the light will cause them to congregate around the light. That would be a very boring result! But the coordination that they exhibit, which hopefully you can see in the videos of them swimming, goes beyond simply following their programmed behavior. The robots exhibit very stable behavior as a group – meaning that they move as a cohesive unit – despite not being programmed to do this in any explicit way. With our quantitative analysis of this behavior, we can see that the coordination occurs well after the robots congregate around the light, so simply being programmed to swim towards the light is not enough to produce the coordination we observe. We think this is an interesting result on its own, in that it extends the predictions of the model I referenced above towards the biological context of autonomous goals. Additionally, now that we have established that coordinated behavior can occur as a result of common goals with this robotic system, we can move on to questions about the origins of social behavior, which I’ve touched on a bit in my responses to the other judges’ questions.

  • Further posting is closed as the competition has ended.

Presentation Discussion
  • Robert Bowers

    Guest
    May 20, 2013 | 02:39 p.m.

    This seems to be a reaction to a reaction, going back to a once-standard view that failed to acknowledge the large role of social factors in producing coordinated group action. Was the converse (i.e. this demonstration) ever in question? (You can get this in NetLogo, which is rather cheaper).

  • Josh de Leeuw

    Presenter
    May 20, 2013 | 03:42 p.m.

    Hi, Rob. It’s certainly true that there are many ways to get coordinated behavior through passive dynamics (another great example is Tarcai et al. (2011): http://iopscience.iop.org/1742-5468/2011/04/P04...). We like the embodied and embedded aspect of the robots because we can show that coordination is still achieved despite the noisy aspects of having real autonomous agents.

    I’m not sure if the idea that exclusively asocial factors can generate coordinated behavior was in doubt, but models that make this assumption seem to be relatively recent (see section 4.2.1 of Vicsek and Zafeiris (2009): http://arxiv.org/pdf/1010.5017v2.pdf), and physical demonstrations even more so.

  • Robert Bowers

    Guest
    May 20, 2013 | 04:28 p.m.

    Thanks Josh.
    To clarify, my point was that the rarity of asocial factors in such models has largely to do with a focus on showing that social factors are sufficient.

    I too like the ‘real robots’ approach. But it is fitting to note that the constraints introduced by any real, physical robot are specific to its design. And how these constraints influence group action can be hard to disentangle. In contrast, in the software model (e.g. NetLogo), all constraints are explicit and can be varied systematically to understand what combination of factors is producing the result. (I actually got circling in a NetLogo model once, and that analysis leads me to wonder if your results rely on robots with rather poor steering — not a quality of circling fish).

  • Josh de Leeuw

    Presenter
    May 20, 2013 | 05:18 p.m.

    Yes, this is certainly a point in favor of computational models as opposed to physical systems. We did not specifically test variations in steering ability or program – mostly because the behavior we were after emerged immediately in the system we built and we didn’t need to tinker with parameters to generate coordination. This could either speak to the robustness of the phenomenon or a stroke of luck on our part.

  • John Nagle

    Guest
    May 21, 2013 | 12:25 p.m.

    This is “coordination” in direction around an attractor. Anything moving against the flow gets pushed into going with the flow. It’s the same phenomenon that drives tornadoes, and vortices around a drain.

    Calling this “social” is a bit of a stretch.

    See Craig Reynolds’ original “Boids” paper.

  • Josh de Leeuw

    Presenter
    May 21, 2013 | 12:27 p.m.

    Right! This is explicitly asocial behavior. We are contrasting this with models like Reynolds’ Boids which use social factors to achieve similar outcomes.

  • Manny Drivas

    Guest
    May 23, 2013 | 01:33 a.m.

    Daddy’O, That was really cool! What made them follow a pattern?

  • Josh de Leeuw

    Presenter
    May 23, 2013 | 10:32 a.m.

    Hi Manny! Think of the robots like bumper cars. Imagine everyone driving the bumper cars is going around the track in the same direction. Now, imagine you are driving a bumper car, but you are trying to go around the track in the opposite direction from everyone else. This will be really tough because you’ll keep running into people. But, if you go around the track in the same direction as everyone else, the ride will be smooth, and even if you hit someone else it won’t be a jarring collision. Less fun, perhaps! The robots end up in the situation where everyone is going around the track in the same direction because if a few robots try to go the “wrong” direction, then the other robots bump them back into alignment.
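    That bump-back idea can be caricatured in a few lines of code: treat each robot as just a direction around the track (+1 or -1), and let each "collision" knock one agent moving against the majority into the majority's direction. This is a deliberately oversimplified toy of the mechanism, not the robots' actual physics:

```python
def align_by_collisions(directions):
    """Toy ring model: +1 = clockwise, -1 = counterclockwise.
    Each round, one agent moving against the majority flow is bumped
    into the majority direction; repeat until the group agrees."""
    dirs = list(directions)
    while len(set(dirs)) > 1:
        majority = 1 if sum(dirs) >= 0 else -1
        dirs[dirs.index(-majority)] = majority  # one collision per round
    return dirs
```

    For example, align_by_collisions([1, -1, 1, -1, 1]) settles to all +1: the two wrong-way drivers get knocked into the flow one bump at a time.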

  • Brian Drayton

    Faculty
    May 23, 2013 | 07:38 a.m.

    For a person of a certain age, like myself, this is inevitably reminiscent of Braitenberg’s Vehicles. But it is also reminiscent of the problem in organismal development of the relationship between biological and physical components of form. Stimulating!
    Where do you go from here?

  • Josh de Leeuw

    Presenter
    May 23, 2013 | 10:36 a.m.

    I take a comparison to Braitenberg vehicles as a high compliment. Thanks!

    You are right on track with the problem you’ve highlighted. We think our system can be used to explore questions about the origins of social behavior. We can set up artificial selection experiments with these robots. One implication of the finding is that you don’t need selection for grouping to get these asocial groups, since the mechanism that produces the grouping is useful in other contexts. But, once you have grouping like this, with the benefits and costs that are associated with it, then selection might favor social sensing as a means to make the grouping more efficient.

  • Aaron Olsen

    Trainee
    May 23, 2013 | 12:14 p.m.

    Awesome video! Clear and concise explanation with amazing graphics to demonstrate the research!

  • Josh de Leeuw

    Presenter
    May 23, 2013 | 12:15 p.m.

    Thank you!

  • Robert Bowers

    Guest
    May 23, 2013 | 02:41 p.m.

    Thanks for your earlier replies, Josh.
    Can you clarify the connection between your demonstration and your example of milling fish? The milling fish don’t seem to be approaching some common goal (as the light, in your model).

    My intuition (and observation) is that if things as agile as fish or birds were approaching a common goal they wouldn’t mill about it. Why should we think that your mechanism will work for more than tugboats and bumper cars?

  • Josh de Leeuw

    Presenter
    May 23, 2013 | 03:38 p.m.

    The milling fish example is related in the sense that the model produces similar behavior at a group level. I think it is highly unlikely that the underlying mechanisms are the same.

    I think the agility/maneuverability question could certainly be tested with a model. If it’s right, I’m not sure that it would limit the application of the model to tugboats and bumper cars. There might be other biological organisms that would fit the requirements found by a model of the behavior.

  • Robert Bowers

    Guest
    May 23, 2013 | 03:51 p.m.

    Can you give us an example of a behaviour or organism that would fit your model?

  • Josh de Leeuw

    Presenter
    May 23, 2013 | 04:14 p.m.

    Some single-cell organisms seem to match this model pretty closely (Rappel et al., 1999): http://srnano.ucsd.edu/~rappel/pub/vortex.pdf

    Since the asocial mechanism on its own is relatively limited in terms of the behaviors it can produce (because the agents must be in close proximity) and has costs that animals might want to avoid (for the same reason), it makes sense for social mechanisms to be favored. However, since asocial mechanisms can create grouping without selection for grouping, this suggests an evolutionary hypothesis about the origins of social mechanisms. We could test a version of this hypothesis with this model, by allowing the robots to evolve social sensing.

  • Robert Bowers

    Guest
    May 24, 2013 | 12:06 p.m.

    How would you test your evolutionary hypothesis with these robots? You say, by allowing them to evolve social sensing, but how would this be done?

  • Further posting is closed as the competition has ended.