-
Hi @pevogam, nice that you found openQA. Thank you for your article. It is very interesting to see Guibot, and great to see other approaches. openQA of course also uses OpenCV, as well as Tesseract OCR for the OCR backend, although the latter is not very prominent, likely because the normal screen matching approach is easy enough to maintain :) A big benefit of openQA, for example for openSUSE distribution testing, is being completely independent of the operating system of the machine we want to run the tests on. Is Guibot running within the same operating system that runs the applications you want to test, or can that be independent?
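To make the "normal screen matching approach" a bit more concrete for readers of this thread: at its core it is OpenCV template matching of a reference image against a screenshot. The following is only a minimal, hypothetical sketch; the file names and the 0.9 similarity threshold are placeholders, not openQA's actual needle-matching code.

```python
# Minimal template-matching sketch with OpenCV; file names are placeholders.
import cv2

screen = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE)
needle = cv2.imread("needle.png", cv2.IMREAD_GRAYSCALE)

# Slide the needle over the screenshot and compute a normalized similarity map.
result = cv2.matchTemplate(screen, needle, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

# Accept the match only above a similarity threshold (0.9 chosen arbitrarily).
if max_val > 0.9:
    print(f"needle matched at {max_loc} with similarity {max_val:.2f}")
else:
    print("no sufficiently similar region found")
```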
-
openQA and its backend os-autoinst serve multiple purposes:
I assume Guibot would only help with point 3. Most of the functions of the test API (3.) can be ignored if not wanted. For instance, kernel tests usually rely on upstream testsuites. The test results can be imported if they use a common format like JUnit. Other test tools could be used in the same way, so instead of doing the clicking via openQA's API one might just launch a Guibot test instead. In that sense it could already be used right now. I'd like to note that the graphics processing hasn't seen many changes lately except for migrating to newer OpenCV versions, so we're most likely lagging behind Guibot a lot. That means we could indeed benefit from integrating the tool. Like I said, it could theoretically already be used like other external testsuites. I'm saying "theoretically" here because it leads to lots of questions, especially how well it would integrate with the existing features of openQA's web UI (4.).
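To illustrate the "external testsuite" route mentioned above: a small wrapper could launch a Guibot-based script and record its outcome as JUnit-style XML, which openQA could then pick up like any other externally produced result. This is only a hypothetical sketch using the Python standard library; the script and report names are placeholders, and the actual import step on the openQA side is not shown.

```python
# Hypothetical wrapper: run an external GUI test script (e.g. one using Guibot)
# and store its outcome as JUnit-style XML for later import into openQA.
import subprocess
import xml.etree.ElementTree as ET

def run_gui_test(script="guibot_login_test.py", report="gui_results.xml"):
    proc = subprocess.run(["python3", script], capture_output=True, text=True)

    # One <testsuite> with a single <testcase>; mark it failed on non-zero exit.
    suite = ET.Element("testsuite", name="guibot", tests="1",
                       failures="0" if proc.returncode == 0 else "1")
    case = ET.SubElement(suite, "testcase", classname="gui", name=script)
    if proc.returncode != 0:
        failure = ET.SubElement(case, "failure", message="GUI test failed")
        failure.text = proc.stdout + proc.stderr

    ET.ElementTree(suite).write(report, encoding="utf-8", xml_declaration=True)
    return proc.returncode

if __name__ == "__main__":
    raise SystemExit(run_gui_test())
```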
-
Hi openQA community,
This is just in case it interests someone here, but perhaps you would find the Guibot project useful as a potential backend for GUI assertions and operations. Let me elaborate.
We at Intra2net AG have developed a lot of GUI tests for our products and for various packages like the Horde groupware, AD/LDAP, and more. More often than not we needed to simulate a real user operating on our systems (encapsulated for testing as virtual machines), looking at the VNC screen and performing assertions about certain subregions of it. I heard about your project, and since I use openSUSE on my home desktop, I was surprised to see that you do a lot of this kind of validation too. Clearly, I am not fully aware of all the capabilities of your GUI operator, but I would still like to recommend Guibot because of its rich choice of CV (computer vision) backends - from template and feature matching to OCR and DL - and its rich choice of desktop controller backends like VNCDoTool for VNC screens, XDoTool, AutoPy, PyAutoGUI, etc. The CV backends are especially important since, over 7 years of GUI test development, we have hit plenty of corner cases where we needed more advanced forms of screen region matching. The OCR backends (we have three options) are also helpful in recent tests where we want to read text from the screen and convert it back to a string for more elaborate assertions (e.g. regex-based ones).
If you want to know more, just let me know, and I hope we can be of assistance,
Plamen
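For readers who have not seen Guibot before, here is a minimal sketch of its entry-level API, adapted from the project's quick-start documentation; the image names below are placeholders for reference images stored on disk, and exact method availability may vary between Guibot versions.

```python
# Minimal Guibot usage sketch; image names are placeholder reference files
# that would live in the "images" directory.
from guibot.guibot import GuiBot

guibot = GuiBot()                   # a region spanning the whole screen
guibot.add_path("images")           # where to look up reference images

# Assert that a region of the screen is present before interacting with it.
if guibot.exists("login_form"):
    guibot.click("username_field")  # template-match the image and click it
    guibot.type_text("admin")       # type into the currently focused widget
```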