Commit 79b235ef authored by m-guberina
parent 8d078690

wrote some ideas on how to model the cart

TODOS
goal 1: starting points for student projects
---------------------------------------------
1. PUSHING GROUP: some multiprocessing to get camera outputs
available in the control loop
(a process puts its result on a queue managed by robot_manager,
and the control loop pops it non-blockingly.
think through how to do this, then make a skeleton for how to use it).
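a rough python sketch of the queue idea; `grab_frame` and the queue ownership by robot_manager are placeholders, not actual APIs in this repo:

```python
import multiprocessing as mp
import queue  # for the Empty exception; mp.Queue raises the same one

def camera_worker(grab_frame, out_q, n_frames):
    # runs in its own process: grab a frame, drop the stale one, push the new one
    for _ in range(n_frames):
        frame = grab_frame()
        try:
            out_q.get_nowait()  # keep only the freshest result
        except queue.Empty:
            pass
        out_q.put(frame)

def poll_latest(cam_q, previous):
    # non-blocking pop for use inside the control loop; falls back to the
    # previous frame when the camera hasn't produced a new one yet
    try:
        return cam_q.get_nowait()
    except queue.Empty:
        return previous
```

in the real thing `grab_frame` would wrap the vision code, and `camera_worker` would be started with `mp.Process(target=..., daemon=True)` so it dies with the main program.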
2. JENGA GROUP: create a starting point and example with pinocchio.casadi
optimal control (copy something from a tutorial, and use the PD controller
to track the reference) -> done, but the pinocchio.casadi examples
are not imported into this library -> hopefully they can manage themselves
goal 2: challenge
--------------------------------
mpc for path following
------------------------
1. just solve the ocp to move to a point
1.1. start ROS1 in docker, make it print out heron's urdf into a file,
pass that file to pinocchio to construct the robot
(does not have to be from the same program, idgaf, just get it to work asap)
1.2. formulate the OCP to just go to a point -> done on almost the right model
DEADLINE: 2024-11-04
1.3. prepare wrappers for HAL which will make it runnable on the real thing
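a toy stand-in for the go-to-a-point OCP, on a double integrator instead of the heron model, via direct single shooting with scipy. everything here (weights, horizon, the scipy solve) is illustrative only; it just shows the OCP shape: rollout, terminal cost on position/velocity, control regularization.

```python
import numpy as np
from scipy.optimize import minimize

def solve_go_to_point(x0, target, N=20, dt=0.1, w_u=1e-3):
    """Direct single shooting on a double integrator: state [pos, vel],
    control = acceleration, terminal cost pulls the state onto the target."""
    def rollout(u):
        x = np.array(x0, dtype=float)
        for uk in u:
            x = x + dt * np.array([x[1], uk])  # explicit Euler step
        return x
    def cost(u):
        xN = rollout(u)
        return (xN[0] - target) ** 2 + 10.0 * xN[1] ** 2 + w_u * np.sum(u ** 2)
    res = minimize(cost, np.zeros(N))
    return res.x, rollout(res.x)
```

the real version swaps the double integrator for pinocchio dynamics and a proper OCP solver, but the cost structure carries over.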
2. include the cart in the model, formulate OCP with that to move to a point
2.1. there are some arm constraints in pink, look at how it's done there
--> nope
2.2. add a planar joint to one of the arms. it's a big box and that approximates
the cart reasonably well. alternatively you can define it as an individual thing,
and then attach the gripper to that. in any event, it's obviously a planar joint,
and you also have tools to create the (kinematic) model attachment.
to get the second arm to comply with this, you can create (and update i suppose)
a rigidBodyConstraint or some kind of contact shenanigans to fix
it to the free part of the handlebar.
this opens the question of how do you transmit the forces to the cart.
an alternative option is to have a separate model for the cart,
fix contacts for the robot "in the air", compute/extract the contact
force, do J_cart^T to map that to the cart joint frame,
and then do rnea with this to get the acceleration. putting some viscous
friction there should be a relatively easy task.
do note that this complicates life significantly in terms of ACCURATELY
calculating all the derivatives in the differentialactionmodel you have to make
for this system.
in any event, it makes sense to just attach the cart as planar joint
to the tip of the end-effector. this is what you'll go with on heron,
so even if the dual-arm case is different you won't do double work.
when adding a planar object to the gripper, you know you have it
constrained to the gripper's orientation, so maybe you can do it
the other way around for this. if not, you need to create the holonomic
constraint for being at the handlebar. flipping the cart in the roll
(or pitch?) axis certainly won't happen, but you do have to minimize
these forces or the yumi will just stop working.
IT IS ABSOLUTELY NECESSARY TO DISCUSS THIS WITH YIANNIS AND GRAZIANO
because they certainly know more about this.
if you have to dodge the mpc and just use pink or whatever
to get a QP-clik type thing, that's fine.
certainly preferable to not having a working thing.
yet another alternative is to create contacts on both arms,
calc/extract wrenches and update them and the cart movement,
where the dynamic model is blind to the existence of the cart.
this gets rid of some nice properties, but it might be a decent hack.
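a toy numpy stand-in for the "extract contact force, map via J_cart^T, run rnea" idea above. in the real thing M and the nonlinear terms come from pinocchio (crba/rnea); here the cart is just a diagonal-mass planar joint [x, y, theta] with viscous friction, which is enough to show the mapping:

```python
import numpy as np

def cart_planar_accel(qd, f_contact, J_cart, M_diag, b_visc):
    """qd: planar joint velocity [xd, yd, thd]; f_contact: the wrench
    extracted at the handlebar, expressed so that J_cart maps it to the
    cart's planar joint. tau = J^T f, then forward dynamics with
    viscous friction (stand-in for the real rnea-based computation)."""
    tau = J_cart.T @ f_contact
    return (tau - b_visc * qd) / M_diag
```

note this sidesteps exactly the hard part flagged above: the derivatives of this through the differential action model.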
2.3. you need to manually make underactuation work in crocoddyl
--> start with action model in python, just like the acrobot example
--> port that to cpp if it's too slow (almost certainly will be lmao)
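the core of underactuation is just a selection/actuation matrix B so the unactuated joints (the cart's planar joint) get zero torque. hedged numpy sketch; in crocoddyl this logic would live inside the custom action model's calc, and `M_fn` / `nle_fn` are placeholders for pinocchio's crba/rnea:

```python
import numpy as np

def underactuated_fwd_dyn(q, v, u_act, M_fn, nle_fn, actuated_idx, nv):
    # selection matrix B: the reduced control vector only reaches the
    # actuated joints; everything else gets zero torque
    B = np.zeros((nv, len(actuated_idx)))
    for col, j in enumerate(actuated_idx):
        B[j, col] = 1.0
    tau = B @ np.asarray(u_act, dtype=float)
    # forward dynamics: M a = tau - nle  (nle = Coriolis + gravity + friction)
    return np.linalg.solve(M_fn(q), tau - nle_fn(q, v))
```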
3. formulate the path following ocp with robot-cart model, put it in a while loop --> that's your mpc.
test this on a fixed path first
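the while-loop-around-the-OCP structure, sketched generically; `solve_ocp` and `step` are placeholders for the real solver and the real robot interface:

```python
import numpy as np

def mpc_loop(x0, path, solve_ocp, step, n_iters):
    """Receding horizon: re-solve the OCP from the measured state every tick
    and apply only the first control of the returned trajectory
    (warm-starting solve_ocp with the previous solution is what makes it fast)."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iters):
        ref = path[min(k, len(path) - 1)]  # current point on the fixed test path
        u_traj = solve_ocp(x, ref)
        x = step(x, u_traj[0])             # robot moves; the rest is discarded
    return x
```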
4. integrate 3. with actual received path from node running albin's planner
DEADLINE: 2024-11-08 -> push to 2024-11-15
5. run that on real heron, tune until it's not broken and ONLY up to that point.
6. make a function that implements the get-cart-to-station behaviour tree.
this is a control loop that selects which control loop to run based on some
logic flags (smaller number = higher priority): a simple select loop over the
highest-priority set flag.
for every flag there is a process (node + topic) which determines whether to set or unset it.
- PRIORITY 1: if obstacle flag, then run stopLoop (which is just make all velocities 0). flag on
only if there are people around.
- PRIORITY 2: if handle_bar flag, run the loop to grasp the handlebar. if the handlebar
  is grasped (vishnu's vision sets this flag) then the flag is unset; if not grasped
  (gripper status (maybe) + vision decides) then the flag is set.
- PRIORITY 3: pull cart along prescribed path - runs MPC path-following loop. flag on until goal is reached
- PRIORITY 4: dock cart. flag on until cart is docked. i guess vision should check that but idk honestly.
  once this flag is unset, the program is done - shut down.
run this on the real thing
DEADLINE: 2024-11-15
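the priority-select logic of item 6 in a few lines; loop names are placeholders, and in the real thing the flag dict is updated by the vision/status nodes:

```python
def select_control_loop(flags, loops):
    """Pick the control loop for the highest-priority set flag
    (smaller number = higher priority); None means nothing left to do."""
    for prio in sorted(loops):
        if flags.get(prio, False):
            return loops[prio]
    return None
```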
7. do all this but on mobile yumi in sim
DEADLINE: 2024-11-23
8. tune yumi solution
DEADLINE: 2024-11-30
goal 3: usability, verifiability
----------------------------------
1. write some tests just to see that:
a) various parameter combinations work
b) controllers converge in situations they should converge in
c) most basic unit tests on functions
d) preferably some of this is runnable on the real robot,
or at least sim with --start-from-current pose flag
e) a speedrun of files in examples on the real robot,
again just to verify everything works
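a sketch of how test (b) could look: a reusable closed-loop convergence check, usable under pytest. the proportional step is just a stand-in for an actual controller from the library:

```python
def converges(controller, x0, target, n_steps, tol):
    """Run the closed loop and report whether it ends within tol of the target."""
    x = float(x0)
    for _ in range(n_steps):
        x = controller(x, target)
    return abs(x - target) < tol

def p_step(x, target, gain=0.5):
    # trivial proportional controller, placeholder for the real ones
    return x + gain * (target - x)
```

a pytest would then just be `assert converges(p_step, 0.0, 1.0, 50, 1e-3)` with the real controllers parametrized in.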
2. it would be nice to have a function to conveniently define points, namely,
some way to programmatically (or on keyboard input) save the joint and end-effector
positions while manually guiding the robot in freedrive or with compliance control.
this obviously won't generate the best possible trajectories according to literally any
metric other than convenience of use.
--> hopefully outsourceable to students
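the saving side of this is tiny; hedged sketch where `q` and `T_ee` come from whatever the robot interface exposes (placeholders here), and the keypress hook is left out:

```python
import json

def record_pose(store, name, q, T_ee):
    # called on a keypress while manually guiding the robot in freedrive;
    # q is the joint configuration, T_ee the end-effector pose as a matrix
    store.append({"name": name, "q": list(q), "T_ee": [list(r) for r in T_ee]})

def save_poses(store, path):
    # dump everything recorded in one session to a json file
    with open(path, "w") as f:
        json.dump(store, f, indent=2)
```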
3. implement a way to simulate interaction with the environment.
it will boil down to programmatically putting in wrench readings.
it can be just a step to put the wrench at 10N or whatever,
which is of course non-physical, but it will be good enough just to
verify that the code works. this is easier than putting the robot
in a simulator.
--> a better result for a similar amount of work is to run a simulation
in pybullet. you can find an example setup in minimal_crocoddyl_examples repo
for kuka iiwa, port as much as possible of that here
(probably just shove into ifs in robotmanager)
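the step-wrench version of item 3 is literally this; the 6-vector layout is an assumption, match it to whatever the F/T reading code uses:

```python
def fake_wrench(t, t_on, magnitude=10.0):
    """Non-physical step wrench reading: zero before t_on, `magnitude` N
    along z afterwards. Shove this in where the F/T sensor reading would be."""
    fz = magnitude if t >= t_on else 0.0
    return [0.0, 0.0, fz, 0.0, 0.0, 0.0]  # [fx, fy, fz, mx, my, mz]
```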
4. use python's logging instead of printing
5. put the robot in a simulator and use the simulator.
--> combine this with point 3, make life easier by solving both
6. think of some basic metrics to calculate along
trajectories (print out some info on the performance of point-to-point and traj-following
runs, including comparing to the same runs in simulation, etc. plots should be
fine to start, but having a running rms or something sounds like a good idea).
also make sure the x-axes are labelled correctly (wall-clock time)
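the running rms mentioned above, incrementally so it can be computed inside the control loop:

```python
import math

def running_rms(errors):
    """Incremental RMS of the tracking error, one value per timestep,
    ready to plot against wall-clock time on the x-axis."""
    out, acc = [], 0.0
    for i, e in enumerate(errors, start=1):
        acc += e * e
        out.append(math.sqrt(acc / i))
    return out
```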
goal 4: more controllers
------------------------
1. object regrasping with crocoddyl
--> to be done in separate repo, planar_in_hand_manipulation,
but it should depend on everything here as much as possible
2. finish adding all ik algorithms
--> try integrating pink for QP inverse kinematics instead
of fixing your own code. it's literally the same API from qpsolvers,
it's just that pink is better documented
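for reference, the poor man's version of what the QP solves, as a damped-least-squares step in plain numpy (the QP route via pink/qpsolvers does the same resolution but lets joint limits become proper constraints):

```python
import numpy as np

def dls_step(J, err, damping=1e-2):
    """One damped-least-squares CLIK step:
    qdot = J^T (J J^T + lambda^2 I)^{-1} err."""
    JJt = J @ J.T
    return J.T @ np.linalg.solve(JJt + damping ** 2 * np.eye(JJt.shape[0]), err)
```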
3. casadi or crocoddyl optimal control with obstacle avoidance (manipulator itself + table/floor is enough)
--> outsource as much as possible to student project a, but ofc help w/ infrastructure
if necessary
4. [hard] adjusting the dmp to get back on the path despite external forces
(fix the problem of writing on a non-flat surface/whatever) --> publishable side-project
--> give it 2-3 days before robotweek
goal 5: panda/yumi
----------------
finally, do what you promised and put this on another robot,
thereby rendering this something publishable in joss
1. transfer the library to panda or yumi or both
--> better idea to transfer to dual kuka iiwa setup.
look how mim people are doing it, but also refer
to drake since they also run on iiwa
--> this is easier because it's not broken (unlike the panda),
there's no abb egm bullshit, and no one works on that robot