Tarik Viehmann edited this page Dec 9, 2024 · 47 revisions

Lab Course Setup

This repository is for developing agents using the ROS 2 CLIPS Executive for the RoboCup Logistics League.

The ros2-clips-executive can be found here: https://github.com/fawkesrobotics/ros2-clips-executive/tree/tviehmann/major-cleanup

Quick Reference

General setup for goal-reasoning with the ros2-clips-executive:

Preface for ROS2

We will use ROS 2 Jazzy.

Key aspects when working with ROS 2:

  • projects utilize different packages
  • packages are organized in workspaces
  • colcon is used to build packages in a workspace
  • vcstool and repos files may be used to fetch packages from multiple sources to easily set up workspaces
  • ROS is tightly scoped: workspaces need to be sourced for them to be available in your current environment (here, environment typically refers to your current terminal)
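The sourcing rule in the last bullet can be illustrated with a toy script (this is not a real ROS workspace; the file path and variable name are made up for the demonstration):

```shell
# Fake "workspace" whose setup script just exports a marker variable.
mkdir -p /tmp/toy_ws/install
echo 'export TOY_WS_SOURCED=yes' > /tmp/toy_ws/install/setup.bash

# Sourcing runs the script in the *current* shell, so the export sticks
# here, but not in any other terminal:
. /tmp/toy_ws/install/setup.bash
echo "sourced: $TOY_WS_SOURCED"
```

A real ROS 2 workspace works the same way: `source install/setup.bash` only affects the terminal you run it in, which is why every new terminal needs to source the workspaces again.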

Getting familiar with ROS 2 base installation

ROS 2 offers several meta packages that can provide you with the core features needed in every ROS environment.

On Ubuntu machines the base installation is located under /opt/ros/jazzy. On our Fedora-based lab machines it is located at /usr/lib64/ros2-jazzy, as this matches the Fedora packaging guidelines. The ROS packages on Fedora come from here.

To source your base installation, simply run:

source /usr/lib64/ros2-jazzy/setup.bash

This will give you access in your current terminal to basic ros2 features, such as the ros2 command line interface ros2cli.

ros2 --help           # check out what the cli offers
ros2 pkg list         # example to list all packages known in your environment
ament_index packages  # another useful tool to query the ament_index directly, which is enabling all this scoping magic

Task 1

Get familiar with the basics of ROS 2 by doing the basic CLI tutorials

Notes:

  1. You do not have to (and in fact, cannot) install anything on the system. This means you should ignore all commands asking you to install packages via apt; those packages should already be available for you, installed via dnf by the system administrator.
  2. Remember, sourcing the base installation is different compared to the description in the tutorials, as described above!

Setup ros2-clips-executive

Now that you are familiar with the basics of ROS, it is time to set up our infrastructure with CLIPS:

We will set up the project using three workspaces as follows:

ros2/
 deps_clips_executive_ws # for dependencies that we do not need to update
 clips_executive_ws
 labcegor_ws

The idea is to keep dependencies that we do not need to update in a separate workspace, so they only have to be built once.

Firstly, create a directory structure for ros2 workspaces

mkdir -p ~/ros2/{clips_executive_ws,deps_clips_executive_ws,labcegor_ws}/src
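Since `mkdir -p` is idempotent, you can safely re-run it in its spelled-out form and then inspect the resulting tree to verify the layout (`find` here just lists the created directories):

```shell
# Spelled-out equivalent of the brace-expansion command above;
# re-running it is harmless if the directories already exist.
mkdir -p ~/ros2/clips_executive_ws/src \
         ~/ros2/deps_clips_executive_ws/src \
         ~/ros2/labcegor_ws/src

# Show the workspace layout (three sibling workspaces, each with a src dir):
find ~/ros2 -maxdepth 2 -type d | sort
```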

Then, get the ros2-clips-executive by following the build steps.

Note: Make sure to always be on the correct branch, which for now is tviehmann/major-cleanup.

Task 2

Get familiar with the CLIPS-Executive by reading through the following READMEs:

  1. Main repository
  2. CLIPS Environment Manager
  3. File Load Plugin
  4. Executive Plugin
  5. Ros Msgs Plugin
  6. cx_bringup

Task 3

Utilizing the RosMsgs, FileLoad and Executive plugins, try to control the turtlesim turtle by publishing to the topic /turtle1/cmd_vel and subscribing to the topic /turtle1/pose. The turtle should move in a loop between the corners bottom-left (1.0, 1.0), top-left (1.0, 9.0), top-right (9.0, 9.0) and bottom-right (9.0, 1.0).
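Before encoding the task in CLIPS, the corner-cycling logic can be sketched in plain shell (illustrative only; `next_waypoint` is a hypothetical helper, and the actual solution must live in your CLIPS rules):

```shell
# Cycle through the four corners of the turtlesim arena.
# Assumes x grows to the right and y grows upward, as in turtlesim.
next_waypoint() {
  case "$1" in
    "1.0 1.0") echo "1.0 9.0" ;;  # bottom-left  -> top-left
    "1.0 9.0") echo "9.0 9.0" ;;  # top-left     -> top-right
    "9.0 9.0") echo "9.0 1.0" ;;  # top-right    -> bottom-right
    "9.0 1.0") echo "1.0 1.0" ;;  # bottom-right -> bottom-left
  esac
}

# One full loop: after four steps we are back at the start.
wp="1.0 1.0"
for _ in 1 2 3 4; do
  wp=$(next_waypoint "$wp")
  echo "next goal: $wp"
done
```

In CLIPS, the same idea maps naturally onto a fact holding the current goal and a rule that asserts the next goal once the pose reported on /turtle1/pose is close enough to the current one.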

3.1 Workspace Setup

You can use this repository to start the task. The following basic steps will help you on your mission:

  1. Create a new workspace (e.g., ~/ros2/labcegor_ws)
  2. Inside the src directory of the workspace, clone this repository and create a new package, e.g., via:
 ros2 pkg create --build-type ament_cmake --license Apache-2.0 labcegor_bringup
  3. Inside the package, you need params, launch and clips directories, which also need to be installed in the respective CMakeLists.txt via
install(DIRECTORY launch params DESTINATION share/${PROJECT_NAME})
install(DIRECTORY clips/ DESTINATION share/${PROJECT_NAME}/clips/${PROJECT_NAME}/)
  4. A simple launch file adapted from cx_bringup will probably suffice for you:
import os

from ament_index_python.packages import get_package_share_directory

from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument, OpaqueFunction, SetEnvironmentVariable
from launch.logging import get_logger  # needed for the warning below
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node


def launch_with_context(context, *args, **kwargs):
    labcegor_dir = get_package_share_directory('labcegor_bringup')
    manager_config = LaunchConfiguration("manager_config")
    log_level = LaunchConfiguration('log_level')
    manager_config_file = os.path.join(labcegor_dir, "params", manager_config.perform(context))
    # re-issue warning as it is not colored otherwise ...
    if not os.path.isfile(manager_config_file):
        logger = get_logger("cx_bringup_launch")
        logger.warning(f"Parameter file path is not a file: {manager_config_file}")


    cx_node = Node(
        package='cx_bringup',
        executable='cx_node',
        output='screen',
        emulate_tty=True,
        parameters=[
            manager_config_file,
        ],
        arguments=['--ros-args', '--log-level', log_level]
    )
    return [cx_node]

def generate_launch_description():
    declare_log_level_ = DeclareLaunchArgument(
        "log_level",
        default_value='info',
        description="Logging level for cx_node executable",
    )
    declare_manager_config = DeclareLaunchArgument(
        "manager_config",
        default_value="clips_env_manager.yaml",
        description="Name of the CLIPS environment manager configuration",
    )

    # The launch description, populated with the declared arguments and actions
    ld = LaunchDescription()

    ld.add_action(declare_log_level_)
    ld.add_action(declare_manager_config)
    ld.add_action(OpaqueFunction(function=launch_with_context))
   
    return ld
  5. Write a simple CLIPS environment manager config (you can orient yourself on the config for the ros msgs plugin example from the cx_bringup package).

3.2 TurtleSim Control

We use the standard turtlesim simulation launched via:

ros2 run turtlesim turtlesim_node

Inspect the topics /turtle1/cmd_vel and /turtle1/pose using the ros2 cli tool to find out what message types are used and get familiar with how to steer the turtle from the command line.

3.3 CLIPS Setup

Encode the task at hand in CLIPS with the help of the CLIPS basic programming guide (bpg) and the documentation for the RosMsgsPlugin (you can of course also look at the provided usage example from the cx_bringup package).

  1. Use deftemplate constructs to encode the task at hand. (Chapter 3 in bpg)
  2. Use the deffacts construct to initialize your knowledge about the task. (Chapter 4 in bpg)
  3. Write rules to interface with the ROS topics and steer the turtle. (Chapter 5.4 up to 5.4.9 in bpg)
  4. If needed, use deffunctions to write some functions. (Chapter 7 in bpg)

In addition, Chapter 12 (in particular up to 12.14) of the bpg serves as a reference for CLIPS functions available in every environment.

Task 4

Now that we have gotten a first hang of CLIPS and its interfaces to the outside world, we can start with the RoboCup Logistics League domain. The goal of this task is to buffer a cap at a cap station. More precisely:

  1. drive the first robot to Cap Station 1 of team Magenta (M-CS1)
  2. pick up a cap carrier from the shelf
  3. place it on the input of the machine
  4. instruct the machine to buffer a cap (RETRIEVE_CAP)
  5. drive the second robot to the output of M-CS1
  6. pick up the product with the second robot

In order to work on this task, make sure to read through the following subsections that will help you get started.

1. RCLL Software Preliminaries

The full rulebook of the league can be found here (pdf) for reference.

The required software is bundled via containers with setup files in the rcll-get-started repository.

cd ~/
git clone -b tviehmann/lab-setup https://github.com/robocup-logistics/rcll-get-started.git

In order to use containers on our lab machines, we additionally add a local config ~/.config/containers/storage.conf:

# This file is the configuration file for all tools
# that use the containers/storage library.
# See man 5 containers-storage.conf for more information
# The "container storage" table contains all of the server options.
[storage]
# Default Storage Driver
driver = "overlay"

# Primary Read/Write location of container storage
graphroot = "/var/tmp/$USER/container/storage/"

# Storage path for rootless users
#
rootless_storage_path = "/home/$USER/.containers/storage"

Additionally, make sure your user has a subuid and subgid range. If this is the case, the following commands will return non-empty output:

grep $USER /etc/subgid
grep $USER /etc/subuid
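Each line in those files has the form user:first_subordinate_id:count. The snippet below parses an illustrative sample entry (the user name and numbers are made up, not taken from a real system):

```shell
# Illustrative /etc/subuid entry (format: user:first_id:count).
entry="alice:100000:65536"

# Split on ':' to see the subordinate ID range the user owns:
echo "$entry" | awk -F: '{ printf "user=%s first=%s count=%s\n", $1, $2, $3 }'
```

A range of 65536 IDs starting at 100000 means rootless containers can map container UIDs onto host UIDs 100000 to 165535.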

2. Starting a Game

In essence, just sourcing the contained setup.sh file of the rcll-get-started repository should provide you with a bunch of new terminal commands, all prefixed by rc_ (press tab twice after typing the prefix to see a full list of commands).

cd ~/rcll-get-started
source setup.sh

We mainly need the two commands

  1. rc_start which starts everything
  2. rc_stop which stops everything

In order to verify that everything works as expected, you can query the state of containers using podman. The main useful commands are:

podman ps # shows list of containers
podman pod ps # shows list of pods
podman rm <c-id> # removes a container
podman pod rm <p-id> # removes a pod

The workflow for starting a game is the following:

  1. Run rc_start
  2. If everything is running correctly, you should be able to open a browser and go to localhost:8080 to see the refbox frontend.
  3. Pressing ctrl + alt + o lets you connect as referee.
  4. Press the play button in the top middle to go to Setup phase. This generates a new game instance.
  5. You should see that the simulated robots are now connected.
  6. Switch to Production phase by clicking on the phase on top.

3. Controlling a Robot via Protobuf

All communication in the RCLL is done by exchanging messages via broadcast peers that transmit protobuf messages. The cx_protobuf_plugin of the CLIPS-Executive lets you interface with protobuf from within CLIPS.

There is already a useful repository containing message definitions at https://github.com/carologistics/rcll-protobuf, which you can clone and build in your workspace to obtain all message definitions that you need. A suitable plugin config is depicted below.

    protobuf:
      plugin: "cx::ProtobufPlugin"
      pkg_share_dirs: ["rcll_protobuf_msgs"]
      proto_paths: ["rcll-protobuf-msgs"]

Basic information about the refbox is described in the wiki. Check out the following articles:

  1. Concepts and Terminologies
  2. The General section of the Communication Protocol.
  3. Machine States

We will use protobuf for three things:

  1. Command the robots via messages defined in AgentTask.proto
  2. Instruct machines via messages in MachineInstructions.proto
  3. Observe the information sent by the refbox

The ports used to communicate with the refbox are defined in rcll-get-started/config/refbox/comm/default_comm.yaml.

The ports used to communicate with the simulator are defined in rcll-get-started/simulator/config.yaml.

The main branch of this repository contains a code skeleton that you can use as a starting point.

Cyclone DDS Config

In order not to multicast to other lab machines, we need to set up Cyclone DDS. Open a new file in the editor of your choice:

gedit ~/cyclone_dds.xml

Paste this in:

<?xml version="1.0" encoding="UTF-8" ?>
<?xml-model href="https://raw.githubusercontent.com/eclipse-cyclonedds/cyclonedds/master/etc/cyclonedds.xsd" schematypens="http://www.w3.org/2001/XMLSchema" ?>
<CycloneDDS xmlns="https://cdds.io/config">
  <Domain Id="any">
    <General>
      <Interfaces>
        <NetworkInterface address="127.0.0.1"/>
      </Interfaces>
      <AllowMulticast>true</AllowMulticast>
      <EnableMulticastLoopback>true</EnableMulticastLoopback>
    </General>
  </Domain>
</CycloneDDS>

Lastly, register Cyclone DDS as your ROS middleware (replace <YOUR-USER-NAME> with your user name) by putting these lines in your terminal config (.bashrc):

gedit ~/.bashrc
  export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
  export CYCLONEDDS_URI=file:///home/<YOUR-USER-NAME>/cyclone_dds.xml

Notes:

  1. In order for these changes to take effect in existing terminals (and terminal tabs), you need to reload the .bashrc:
source ~/.bashrc
  2. You might need to stop the ros2 daemon once for the changes to take effect:
ros2 daemon stop

Colcon Symlinks and Default Configuration

colcon accepts a range of arguments. One particularly handy feature is the ability to build using symlinks:

colcon build --symlink-install

This prevents files from being copied into the installation directory and instead creates symbolic links. This uses less disk space, and changes to files in the source directory are immediately reflected in the installed packages without rebuilding the workspace (this does not apply to files that are processed at build time, such as C++ files that are compiled into binaries). For example, changes to existing Python launch files, YAML configuration or CLIPS files are directly available in the installed package.

Limitations:

  1. Symbolic links are overridden by file copies again if colcon build is later called without the --symlink-install argument.
  2. Symbolic links cannot override existing file copies if colcon build --symlink-install is called after a regular colcon build. To achieve the desired result in this scenario, delete the build and install directories of your workspace and run colcon build --symlink-install again.
  3. New files cannot magically appear in the installation directory. Hence, if you create a new file, make sure to run colcon build --symlink-install again.
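The symlink behavior behind --symlink-install can be demonstrated with a toy file pair (made-up paths, no colcon involved):

```shell
# Mimic a "source" and an "install" location.
mkdir -p /tmp/sym_demo/src /tmp/sym_demo/install
echo "v1" > /tmp/sym_demo/src/config.yaml

# A symlinked install: the install entry points at the source file.
ln -sf /tmp/sym_demo/src/config.yaml /tmp/sym_demo/install/config.yaml

# Edit the source file; the installed view picks up the change
# immediately, with no extra "build" step:
echo "v2" > /tmp/sym_demo/src/config.yaml
cat /tmp/sym_demo/install/config.yaml
```

With a plain file copy instead of the `ln -sf`, the installed file would still contain v1 after the edit, which is exactly the situation a rebuild normally fixes.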

It is also possible to define default arguments for colcon.

Just create the following file in the .colcon directory of your home directory: $HOME/.colcon/defaults.yaml. An example configuration, also including some useful cmake args, is shown below:

build:
  cmake-args:
    - -DBUILD_TESTING=OFF
    - -DCMAKE_VERBOSE_MAKEFILE=ON
    - -DCMAKE_BUILD_TYPE=Debug
  symlink-install: true

In order for colcon to accept the default configuration, the COLCON_HOME variable needs to point to the .colcon directory location.

export COLCON_HOME=$HOME/.colcon/

(You might want to add this to your .bashrc to automatically set the value in every terminal)