testing: use Hurl in CI to test Caddy against spec #6255
base: master
Conversation
Test Results: 6 tests, 6 ✅, 2s ⏱️. Results for commit c718744. ♻️ This comment has been updated with latest results.
Ooo, I like where this is going! Will revisit this after 2.8.
@mohammed90 this looks great! \o/ I'm only wondering one thing:
Since the expectations are basically "the HTTP protocol" (and related stuff), would Caddy actually be the right place to do this? REDbot's value here IMO is exactly that it already knows what requests need to be made and what assertions need to be checked against them; converting that into Hurl-based specs seems like it would be a sizeable project with a sizeable deliverable. Would creating this "Hurl-based HTTP spec test suite" be better as a standalone project which Caddy (and, hopefully, others) can take advantage of and help maintain?
Caddy can be the right place :) We're aiming to test Caddy's conformance to the spec. I skimmed the REDbot repo, and it isn't too complex. Integrating REDbot itself into the CI pipeline might be more of a hassle to maintain. Translating the behavior into Hurl files makes the expectations easier to understand and poke at.
Perhaps, but maintaining such a project is beyond my capacity. I can't initiate and commit to it (my personal backlog is too long already). I may help if it's maintained by a group.
I like where this is going. I'll have to give it a closer look soon :)
Since #5704 was posted, we've been on-and-off brainstorming how to approach testing of a web server. We sorta agreed that a declarative approach is desirable, but we weren't aware of any tools that would facilitate it, nor did we have a concrete plan. We just knew we needed solid tests.
I have recently come across Hurl (https://github.com/Orange-OpenSource/hurl) and was curious whether it meets our needs. It is declarative. It makes HTTP calls. It stands on the shoulders of The Giant®, namely curl. The PoC presented in this branch seems to work. In fact, PR #6249 is a fix for a bug found while building this PoC.
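To give a feel for the format, here is a rough sketch of what one of these Hurl files could look like. The port, path, and expected header values are illustrative only, not taken from this PR:

```hurl
# Request the site root and assert on the response.
GET http://localhost:8080/

HTTP 200
[Asserts]
header "Server" == "Caddy"
header "Content-Type" contains "text/html"
body contains "<html"
```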
This PR is to discuss the approach and to collaboratively add the tests. The core idea is simple:
TODO:
For TODO number 2, code coverage is a helpful tool. There's a way to extract execution coverage of the Hurl tests†, but I haven't found a neat way to present it on GitHub PRs/Actions.
Based on the work done to resolve #5849 and the existence of the REDbot project, we can translate those expectations and rules into Hurl files.
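As a sketch of what translating such a rule might look like, here is a hypothetical Hurl file checking conditional-request behavior (ETag capture followed by If-None-Match). The host and path are placeholders:

```hurl
# First request: fetch a static file and capture its validator.
GET http://localhost:8080/index.html

HTTP 200
[Captures]
etag: header "ETag"

# Conditional request: per RFC 9110, a matching validator should yield 304 Not Modified.
GET http://localhost:8080/index.html
If-None-Match: {{etag}}

HTTP 304
```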
† Using this article as a guide: build Caddy with coverage instrumentation using `go build -cover`. Run Caddy with `GOCOVERDIR=./coverdir caddy run`, then run the Hurl tests. Stop Caddy with either `caddy stop` or `Ctrl-C`. Run `go tool covdata textfmt -i=coverdir -o profile.txt`, then `go tool cover -html profile.txt`. An HTML page opens in the browser with each file annotated by color according to whether it was executed.
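For reference, the footnote's steps consolidated into a shell sketch. The output paths, config file, and test directory are assumptions, not part of this PR:

```sh
# Build Caddy with coverage instrumentation (requires Go 1.20+).
go build -cover -o caddy ./cmd/caddy

# Run Caddy, writing coverage counters into ./coverdir.
mkdir -p coverdir
GOCOVERDIR=./coverdir ./caddy run --config Caddyfile &

# Exercise the running server with the Hurl test files.
hurl --test caddytest/*.hurl

# Stop Caddy so the coverage data is flushed to disk.
./caddy stop

# Convert the binary coverage data into a text profile and render it as HTML.
go tool covdata textfmt -i=coverdir -o profile.txt
go tool cover -html profile.txt
```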