Cleanup module structure #219
AFAIK the repo is structured by technology only. Do you have alternative, cleaner proposals?
I was thinking about changing the project structure so that it won't be one big bag of everything taking ages and tons of disk space to build, but rather a parent pom with submodules that are projects on their own and just refer to it. On the other hand we might think about putting the WildFly container, when executed in managed mode, in a single location instead of spawning it for each and every module build and putting it to all the …
I understand your points, but I think that if we split by submodules, it's going to explode into too many projects. Maybe aggregate them by layers? Web (Servlet, JSF, etc), Server (JPA, EJB, etc) and so on? Probably not the best solution. Agree on the WildFly container single location. Don't know if you can do that, but worth investigating. Feel free to create an issue and assign it to me :)
IIRC single container installation needs to be implemented in Arquillian. @aslakknutsen?
The single install is just a Maven setting. The problem was I couldn't find any option in Maven that defined a parent root; it was always relative to the target module. E.g. mvn clean install <-- couldn't get all modules to target a single root, unless we make it some 'external file ref', e.g. java.io.tmpdir.
I was thinking about the "external file ref" actually. This adds minimal setup effort (IDE mainly), but should give us a more resource-friendly approach. I think the DeltaSpike folks did that. @LightGuard?
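A minimal sketch of that "external file ref" idea, assuming the WildFly managed adapter and the maven-dependency-plugin; the plugin and the wildfly-dist artifact exist, but the execution id, phase binding and exact paths here are illustrative only:

    <!-- parent pom: unpack the server once into a location outside the reactor -->
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-dependency-plugin</artifactId>
        <executions>
            <execution>
                <id>unpack-wildfly</id>
                <phase>process-test-classes</phase>
                <goals>
                    <goal>unpack</goal>
                </goals>
                <configuration>
                    <artifactItems>
                        <artifactItem>
                            <groupId>org.wildfly</groupId>
                            <artifactId>wildfly-dist</artifactId>
                            <version>9.0.0.CR1</version>
                            <type>zip</type>
                            <!-- the zip unpacks into ${java.io.tmpdir}/wildfly-9.0.0.CR1 -->
                            <outputDirectory>${java.io.tmpdir}</outputDirectory>
                        </artifactItem>
                    </artifactItems>
                </configuration>
            </execution>
        </executions>
    </plugin>

    <!-- each module's arquillian.xml: point the managed adapter at the shared install;
         java.io.tmpdir is a system property, which Arquillian descriptors can resolve -->
    <container qualifier="wildfly-managed" default="true">
        <configuration>
            <property name="jbossHome">${java.io.tmpdir}/wildfly-9.0.0.CR1</property>
        </configuration>
    </container>

Whether the unpack execution needs an extra skip or marker check so it doesn't re-run for every module is something to verify; the point is only that the server lives in one absolute location instead of under every module's target directory.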
Any thoughts on that? Stuff is growing... and failing constantly. Don't we see any problems with it? Do we actually know which modules are healthy?
No, I don't think so. It's the intention of tests to fail, since they are pointing out bugs in the servers. For instance I just added a JASPIC test that fails on GlassFish but succeeds on JBoss. The fact that the test fails on GlassFish is not a problem with the test or an indicator of the health of a module, but it points out that GlassFish doesn't implement the JASPIC spec correctly here. Of course sometimes failing tests ARE an issue, since for some tests JBoss needs to be configured in standalone.xml, and when a new JBoss server is installed or migrated that configuration is not always done again and the tests depending on it fail. However, that too is not really an issue with the tests themselves, but an issue with the environment configuration.
In Eclipse Mars you can now have hierarchical projects, see e.g. http://tools.jboss.org/blog/2015-02-23-alpha1-for-mars.html. This makes working with the modules much friendlier. We could additionally go for @radcortez's suggestion of organizing per layer.
I think it would be cool if we set up the samples in a way that people just need to check out the sample they need. I already have a few people asking me for help because they were only interested in one particular sample and needed to check out and import the full project into the IDE.
I have somewhat of the same issue. I work with several vendors to help improve their JASPIC implementation based on the test results, and often tell them to just comment out all modules except jaspic and the common one. For that particular situation the checkout itself isn't the real problem, but waiting for all tests to have run before it finally starts the tests you're interested in is problematic. Maven has some provisions for running specific tests, but they are not quite it either. You'd want to say: build just the jaspic module and run its tests.
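For the "build just the jaspic module" case, Maven's reactor options already come fairly close; a rough example, where the module path is assumed to match the repository layout and the profile name is a placeholder for whichever server profile is used:

    # build only the jaspic module plus the modules it depends on (e.g. the common one)
    # and run its tests against the server selected by the profile
    mvn clean install -pl jaspic -am -P <server-profile>

The -pl flag selects the projects to build and -am ("also make") pulls in their dependencies, so the reactor does not have to walk every sample first.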
I do see several problems with such an approach. First of all, with unstable builds we do not really know what is working where exactly. The structure is far from what I would personally consider a fast feedback loop. This can easily lead to sneaking in regressions or simply bad code (I would also strongly advise some informal code reviews for any PR). At the end of the day the build is simply useless. Another problem is how we should actually treat these "samples": as a TCK, or as a library of simple yet useful examples which can be used as a foundation for a Java EE project. If we consider the former, we should think about having test categories as well as several builds running our "TCK" against different servers. If we think about the latter, having a project in such a condition does not advertise Java EE as a solid technology choice to build your app on. Just my personal feeling.
How well does it work? Any experience with the "building workspace" issue?
How about
I think the key thing is the sheer quantity of modules in the project. To me there are too many, and to someone just looking at it, way too many. I feel bad because last year at JavaOne I volunteered to fix the build issues for Arun, but can't due to how it's structured. I would prefer a module per technology, and tests representing the functionality needed.
This is exactly the problem I see. I don't feel like it is really easy for people to contribute. Honestly, even for me it's simply a counterproductive setup.
@bartoszmajsak what do you suggest then?
I would love to brainstorm about possibilities to slice this repository and also about smarter builds. Modules are self-contained and they do not depend on each other. This opens up several possibilities. We can try to run them in parallel (but that might be a bit challenging with Arquillian and managed containers). However it would be great to achieve such a setup and improve Arquillian (if needed) to provide such a capability; it is very important for any reasonably sized project. Another thing which comes to my mind is a "smart build". We have all the needed information about which modules have been changed within a given commit / PR. Therefore we can have some sort of hook either on the CI server level or even in the build tool (however with …
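A rough sketch of that "smart build" hook, assuming top-level directories map one-to-one to Maven modules; the branch names and the filtering are illustrative only:

    # derive the top-level modules touched by this commit / PR and build only those,
    # plus whatever they depend on
    CHANGED=$(git diff --name-only origin/master...HEAD | cut -d/ -f1 | sort -u | paste -sd, -)
    [ -n "$CHANGED" ] && mvn clean install -pl "$CHANGED" -am

Changes to the root pom or to shared test utilities would still have to trigger a full build, so a real hook needs a bit more filtering than this, but the information is all there in the diff.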
Well, in my opinion the build is not unstable at all. It's an unfortunate fact of life that not all functionality works on every server. That's why there are several profiles for different servers in the first place; if every server perfectly implemented all functionality we would only need a single server profile. This project is simply not a single business product which indeed has to be built with zero tests failing.

In this case, if I'm not mistaken, JBoss is used to run the tests. This is just one server. The current version for instance doesn't support propagation of a principal from the web layer to an EJB bean. If you ran the same test, testing that aspect, against WebLogic or GlassFish it would pass. This doesn't make "the build" unstable; it means JBoss doesn't support principal propagation. Of course, if a specific test fails on ALL supported servers, it warrants a closer look. A truly faulty test, say one that accidentally does a divide by zero in Java code, will always fail, independent of which server you test it on.

I agree that it's currently hard to distinguish between a bug in the server and a bug in the test, and it's not entirely trivial to work around this. You could perhaps use annotations to indicate that some tests are "supposed" to fail for certain known servers, but don't forget that the number of failed tests is also an indication of how correct a specific server is. Without that, you'd probably want to create a report based on the number of annotations present for a given server, but who's going to code that one?
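To make the "supposed to fail on server X" idea a bit more concrete, here is one possible shape as a minimal JUnit 4 sketch; the annotation, rule and Server enum are hypothetical and not part of the project, and the server_name property is borrowed from the profile idea further down in this thread:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.util.Arrays;

    import org.junit.AssumptionViolatedException;
    import org.junit.rules.TestRule;
    import org.junit.runner.Description;
    import org.junit.runners.model.Statement;

    /** Hypothetical marker for tests known to fail on a given server due to a server bug. */
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface ExpectedToFailOn {
        Server[] value();
        String issue() default ""; // link to the vendor's bug tracker
    }

    enum Server {
        WILDFLY, GLASSFISH, WEBLOGIC, TOMEE;

        /** Resolved from a property the active Maven profile would set, e.g. -Dserver_name=glassfish. */
        static Server current() {
            return valueOf(System.getProperty("server_name", "wildfly").toUpperCase());
        }
    }

    /** Skips (rather than fails) tests annotated as known issues for the server under test. */
    class KnownServerBugRule implements TestRule {
        @Override
        public Statement apply(Statement base, Description description) {
            final ExpectedToFailOn expected = description.getAnnotation(ExpectedToFailOn.class);
            if (expected != null && Arrays.asList(expected.value()).contains(Server.current())) {
                return new Statement() {
                    @Override
                    public void evaluate() {
                        // reported as skipped, so a per-server report could count the known issues
                        throw new AssumptionViolatedException(
                                "Known issue on " + Server.current() + ": " + expected.issue());
                    }
                };
            }
            return base;
        }
    }

A test class would then declare the rule with @Rule and mark individual methods with @ExpectedToFailOn(value = Server.GLASSFISH, issue = "..."); counting those annotations per server would give exactly the "how correct is this server" report mentioned above.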
Well, you can start by reviewing some of the existing code, I guess. There are a couple of areas that can be reviewed:
I'd love it if you could take a third look at my JASPIC test assumptions. As said, they already got a second look, but a third one would never hurt, especially as some assumptions are quite subtle and are based on intricate spec text.
A good TCK(-like) can IMHO actually be both. I often looked at the Mojarra tests (not the official TCK, but the project-specific tests) to get some details on how specific functionality works. Same for CDI, where I do look at the actual TCK.
Arguably it's better to know upfront that something is not working than having the user see something fail right before a deadline or, worse, in production. Hopefully it also motivates vendors to improve their product so fewer tests fail for their server, which surely improves Java EE. One way or the other, when something does not work it simply is what it is. Not telling the user about this will not make it work. And to counter this argument: most servers, even closed ones, have public bug trackers. I don't think we should hide bugs, either via this project or via public trackers, to make Java EE look better than it is.
Projects indeed nest ;) It's a visual thing, functionality should not change. Tested it with some hobby projects, not yet on the Java EE samples, but I intend to try this soon.
Have to admit I never tried this, but let's see how it works, thanks! But suppose it does work; then, in combination with nested projects in Eclipse, there are not that many problems with the structure, are there?
I agree with @arjantijms about the build not being unstable. After all, one of the purposes of this project was also to make sure that the tests run on every server and, if not, to make sure that the test is right and that the server is the problem. So it's kind of acting like a TCK. @bartoszmajsak why don't you do some work in your fork and then we can review it?
There is a substantial difference between a broken / failing build and an unstable one (at least if we stick to the Jenkins definition). When a build fails or ends due to some error (like we are facing now with Travis) we lose the chance to run other tests, so we don't know if the changes we made actually work unless we run them locally, which pretty much contradicts the idea of CI to some extent. In our current situation we have all builds failing, and my fear was that we were slowly heading towards a "broken windows theory" situation, where we just leave it like that without really caring about it. I totally understand that this repository is not a complete product which should have all tests passing. If the TCK approach is what we have in mind, it's obvious that not all the servers (even though I assume most of them are Java EE 7 certified to a great extent, or are the reference implementation) will always have green builds. We should at least have some way of knowing immediately why the build is not good and on which server (maybe a Travis hook can add such information as a PR comment saying it's good on GF, TomEE and JBoss but not on WLS, or something). Also, what are the criteria in terms of build health to accept a PR?
Sure I can do "some work", otherwise I wouldn't be here. But it would be good to first know exactly what work is desired and what we all agree on. After the "merge vs rebase" story I am not really keen to spend the only important resource I have (time) on something which will not be used afterwards. Therefore I wanted to hear others' opinions about the structure and whether we really need changes in this area. As for the smarter builds, it is definitely something I want to work on in general, but it's not really stuff which is "pull-requestable". I would be thrilled to hear others' experiences and ideas so I can get some inspiration for this work.
Sure, I will start reviewing our code base soon.
This we can automate if we agree on the conventions. We can also use EditorConfig or proprietary IDE settings to share with all contributors.
I'm totally up for it. Shall we discuss it in a separate thread?
This should also be part of every PR review before merging to master.
Maybe not everyone remembers, but we still have this awesome piece http://javaee.support/ created by @aslakknutsen. IMHO this not only gives some indication of how we could separate the project but also provides an overview of failing tests. Unfortunately I don't remember exactly which build it is based on. @aslakknutsen could you shed some light?
I agree it's important to be able to separate between 'New failing / something's wrong' and 'Expected to fail in this version'. As pointed out earlier, part of my initial motivation for adding Arquillian tests to all the samples was to show 'what works where'. A fairly simple solution could be to expose some variable and a little helper class in the Maven profiles and use JUnit Assume to weed out failures from expected failures. E.g.

    <profile>
        <id>arq_wildfly_9.0.0.CR1</id>
        ...
        <properties>
            <server_name>wildfly</server_name>
            <server_version>9.0.0.CR1</server_version>
        </properties>
    </profile>

    @Test
    public void testX() throws Exception {
        Assume.assumeThat("Ref https://issues.jboss.org/browse/WFLY-4690",
                Samples.notRunning(WILDFLY).version("9.0.0.CR1"));
        // do test stuff
    }

That should show it at least as 'success but ignored' in the test report. As @aheusingfeld points out we can add some extra oil to javaee.support to display the status differently based on the new Assume, e.g. list all 'working samples but failing for a given server' as a 'what to expect' type of listing. @aheusingfeld Not sure when I last updated it, but it needs some updating to read the new CloudBees/Travis test results.
As we are growing in size, having everything under one big module bag is not really the best idea. It's very unfriendly for IDEs (not only for Eclipse, to be honest). Therefore I would like to open the discussion on how we should slice the repo. We already have main modules covering different aspects of the Java EE platform in dedicated sub-folders, so those would be good candidates for me. What are your thoughts?