Limelight AprilTags + Automation Actions #3
As of now, I have written the AlignToTarget and MoveToTarget actions using the Limelight. The Pigeon was unplugged as of the last meeting, so once we replug it, those actions should work.
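A minimal sketch of what an align-to-target action could look like, assuming the standard Limelight `tx` NetworkTables entry; the `SwerveDrive` interface, the PID gain, and the sign convention are placeholders for illustration, not the repo's actual API.

```java
import edu.wpi.first.math.controller.PIDController;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj2.command.Command;
import edu.wpi.first.wpilibj2.command.Subsystem;

// Hypothetical drive interface used only for this sketch; the repo's actual
// swerve subsystem API may look different.
interface SwerveDrive extends Subsystem {
    void drive(double vxMetersPerSec, double vyMetersPerSec, double omegaRadPerSec);
    void stop();
}

// Sketch of an align-to-target command: rotate the swerve until the Limelight's
// horizontal offset (tx, in degrees) is close to zero.
class AlignToTargetSketch extends Command {
    private final SwerveDrive swerve;
    private final PIDController rotationPid = new PIDController(0.05, 0.0, 0.0); // placeholder gain

    AlignToTargetSketch(SwerveDrive swerve) {
        this.swerve = swerve;
        rotationPid.setTolerance(1.0); // allowed tx error, degrees
        addRequirements(swerve);
    }

    @Override
    public void execute() {
        // "tx" is the horizontal offset from crosshair to target (Limelight NT API).
        double tx = NetworkTableInstance.getDefault()
                .getTable("limelight").getEntry("tx").getDouble(0.0);
        swerve.drive(0.0, 0.0, rotationPid.calculate(tx, 0.0));
    }

    @Override
    public boolean isFinished() {
        return rotationPid.atSetpoint();
    }

    @Override
    public void end(boolean interrupted) {
        swerve.stop();
    }
}
```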
Over the past week, I've been trying to estimate the distance between the Limelight and the AprilTag, but without a big breakthrough. After struggling for a while to get exact enough values for my calculation to work, I changed the method and used area instead of height. For some reason this method didn't work as well as I had hoped: it required finding a focal length, which was less efficient and didn't use the services the Limelight already provides for us. Now I've started using the 3D position, which gives me the vector in space between the target and the camera. This approach is much more efficient and, with the right values, will give an accurate distance. Using this function, I'll get the X, Y, and Z components of the camera-to-target vector (for example, for the X component: cameraPose3dTargetSpace.getX()). I haven't tried this approach yet, so I'll report on the progress once I do.
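A small sketch of that 3D-position approach, assuming the default "limelight" table name: the `camerapose_targetspace` NetworkTables entry returns an array whose first three elements are the camera's translation relative to the tag in meters, so the straight-line distance is just the norm of those components.

```java
import edu.wpi.first.networktables.NetworkTableInstance;

public class LimelightDistanceSketch {
    // Straight-line distance (meters) from the camera to the primary in-view
    // AprilTag, using the "camerapose_targetspace" entry: the first three array
    // elements are the translation components in target space (meters).
    public static double distanceToTargetMeters() {
        double[] camPose = NetworkTableInstance.getDefault()
                .getTable("limelight")
                .getEntry("camerapose_targetspace")
                .getDoubleArray(new double[6]);
        double x = camPose[0];
        double y = camPose[1];
        double z = camPose[2];
        return Math.sqrt(x * x + y * y + z * z);
    }
}
```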
Hi Noam,
I haven't written down the measurements. Next time I'm working on the robot, I'll check it and get back to you.
There's no need to measure using area manually; the AprilTag library provides better and more precise calculations. The Limelight gives us complete information on its own, including positioning. See the Limelight API: https://docs.limelightvision.io/docs/docs-limelight/apis/complete-networktables-api#apriltag-and-3d-data
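For instance, the field-space robot pose and a target-validity flag are plain entries in that same table; a brief sketch, assuming the default "limelight" table name and that a field map is loaded on the camera:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class LimelightPoseSketch {
    private static final NetworkTable table =
            NetworkTableInstance.getDefault().getTable("limelight");

    // "botpose": robot pose in field space as x, y, z (meters) followed by
    // roll, pitch, yaw (degrees); newer firmware appends extra fields such as latency.
    public static double[] getBotPose() {
        return table.getEntry("botpose").getDoubleArray(new double[6]);
    }

    // "tv" is 1 when the Limelight currently sees a valid target, 0 otherwise.
    public static boolean hasTarget() {
        return table.getEntry("tv").getDouble(0.0) == 1.0;
    }
}
```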
@NoamZW There's no need for auto actions for Limelight-based motion; we'll use path planning and similar tools instead.
Learn how to use the Limelight AprilTag pipelines: configuring them and reading data from the camera. Build and test Actions to drive the Swerve based on info from the Limelight: specifically, align to target, move to target, and general identification of the target or multiple targets.
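For the target-identification part, the primary tag's ID and a full multi-target dump are also plain NetworkTables entries; a brief sketch, again assuming the default "limelight" table name:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class LimelightTargetIdSketch {
    private static final NetworkTable table =
            NetworkTableInstance.getDefault().getTable("limelight");

    // "tid": ID of the primary in-view AprilTag (-1 here when nothing is seen).
    public static long getPrimaryTagId() {
        return (long) table.getEntry("tid").getDouble(-1);
    }

    // "json": full JSON dump of the current targeting results, useful when
    // several tags are in view and per-target data is needed.
    public static String getAllTargetsJson() {
        return table.getEntry("json").getString("");
    }
}
```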