Image Zoom #10
Recognizing and forwarding gestures is definitely in the Navigator's realm, since the implementation is tightly coupled with the internal views. So I would be in favor of adding double tap and pinch in/out. Long taps, pans, and swipes could be interesting, but they might conflict with default behaviors such as selection and pagination. Pinches in particular could be used by a reading app to increase/decrease the font size in a reflowable document, when not over an image.

To keep this generic and flexible, these gestures should be recognized over any element. For the double tap, like you mentioned, this will add a small delay for single taps. Personally, I don't think this is such a big issue because we already have such delays in fixed layouts and PDF to zoom in the page. But maybe we could let the app opt in to the gestures it's interested in, in case it doesn't want the delay incurred by double tap.

When forwarding the gesture to the reading app, besides the gesture point, we should also provide some additional context. For example, something like this:

/// Metadata about an element interacted with by a gesture.
struct GestureTarget {
    /// Frame of the element relative to the navigator's view.
    let frame: CGRect

    /// Content of the element interacted with.
    let content: Content?

    /// Raw content of the element interacted with.
    ///
    /// Can be used for advanced cases, for example it is the outer HTML of the element
    /// in an HTML resource.
    let rawContent: RawContent?

    /// Content extracted from an element.
    enum Content {
        /// Text element.
        /// If possible, provide the `word` that was touched by the gesture, in
        /// the element.
        case text(String, word: String?)

        /// Media element, e.g. a bitmap, SVG or audio file.
        /// Retrieve the `Link` from the `Publication`, since it might contain
        /// `alternate` in higher resolutions.
        case media(Link)
    }

    /// Raw content extracted from an element.
    struct RawContent {
        /// Media type which can be used to determine the format of the data.
        let type: String

        /// String representation of the raw data.
        /// A bytes array should be base64-encoded.
        let data: String
    }
}

Do you see any other kind of content I forgot? A few implementation notes:
To keep things simple, initially I would just recognize the gestures on any element. If we find that we need something more targeted, for performance or other reasons, we can extend the API.
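For illustration only, here is a rough sketch of how such a callback could be surfaced to the reading app, building on the GestureTarget type above. The RecognizedGesture enum and the delegate protocol are hypothetical names used for this sketch, not an existing Readium API:

import CoreGraphics

/// Gestures a reading app could opt in to, so single taps only pay the
/// double-tap delay when the app actually asked for double taps.
/// (Hypothetical type, for illustration.)
enum RecognizedGesture {
    case doubleTap
    case pinch(scale: CGFloat)
    case longPress
}

/// Hypothetical delegate the navigator would call after recognizing a
/// gesture over an element.
protocol NavigatorGestureDelegate: AnyObject {
    /// `point` is expressed in the navigator view's coordinate space.
    /// Returning `true` consumes the gesture; returning `false` lets the
    /// navigator fall back to its default behavior (e.g. pagination or zoom).
    func navigator(didRecognize gesture: RecognizedGesture,
                   at point: CGPoint,
                   on target: GestureTarget) -> Bool
}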
@mickael-menu thanks for chiming in and for the thoughts! This has shifted a little lower on my priorities, but when I go to implement this I'll revisit and reference this issue.
I'm moving this to
Is this still planned?
Not for this year; it's in the icebox at the moment.
Let's say that I want to add a feature where double-tapping an image in a book displays it fullscreen in a native UI. Or even better, pinching on the image turns it into a native image that I can stretch, rotate, and snap to fullscreen, like in the Photos app.
(I have done this with another app, many years ago.)
I'm wondering what the best way would be to go about implementing at least the double-tap, and ideally in a way that can be integrated into Readium, whether it belongs in the Navigator, the app, or some other part of the codebase.
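For concreteness, here is a minimal sketch of the app-side presentation once the app has a UIImage for the tapped element. It only covers pinch-to-zoom and tap-to-dismiss, not the rotate/snap behavior of the Photos app; the class name and wiring are purely illustrative:

import UIKit

/// Minimal fullscreen image viewer with pinch-to-zoom, dismissed by tap.
/// Assumes the image bytes were already fetched from the publication's
/// resource for the tapped element.
final class FullscreenImageViewController: UIViewController, UIScrollViewDelegate {
    private let image: UIImage
    private let imageView = UIImageView()
    private let scrollView = UIScrollView()

    init(image: UIImage) {
        self.image = image
        super.init(nibName: nil, bundle: nil)
        modalPresentationStyle = .fullScreen
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .black

        // Scroll view provides the pinch-to-zoom behavior.
        scrollView.frame = view.bounds
        scrollView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        scrollView.minimumZoomScale = 1
        scrollView.maximumZoomScale = 4
        scrollView.delegate = self
        view.addSubview(scrollView)

        imageView.image = image
        imageView.frame = scrollView.bounds
        imageView.contentMode = .scaleAspectFit
        scrollView.addSubview(imageView)

        // Tap anywhere to dismiss.
        let tap = UITapGestureRecognizer(target: self, action: #selector(dismissSelf))
        view.addGestureRecognizer(tap)
    }

    func viewForZooming(in scrollView: UIScrollView) -> UIView? { imageView }

    @objc private func dismissSelf() { dismiss(animated: true) }
}

The presentation itself is the easy part; the real question is how the double tap and the image get from the web view to the app in the first place, which is what the notes below are about.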
Purely from an implementation perspective, here are some notes:
Detect whether a tap landed on an `<img>` tag and only run the double-tap timer in that case; there's no reason to give all tap events a delay.

I'm tempted to do a bunch of this work in JS and then wire up a new delegate method on the Swift side, but I don't think such a thing would be merged. The double-click handler could probably be implemented on the Swift side, but what about pinch-to-zoom-out? I don't think that's going across the default tap bridge. Also, should it be customizable which elements this works for? Etc.
Anyway, I will probably hack something together for now, but I wanted to discuss what the optimal route is on this. Some of it might be Swift delegate code, and some might be injected JS. I know R2 is intended to be flexible enough not to be forked, and I'm not sure how much of this should be supported natively and/or what the strategy would be for enabling this kind of functionality for app devs.
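As a sketch of what "hacking something together" could look like, here is one way to inject a script that reports double-clicks on `<img>` elements back to native code through a WKScriptMessageHandler. The message name, payload shape, and helper function are made up for illustration; a real integration would still need a supported way to register the script with the Navigator's web views.

import WebKit

/// Injected script: report double-clicks on <img> elements, with the
/// image's src and bounding rect. (Selector and message name are illustrative.)
private let imageDoubleTapScript = """
document.addEventListener('dblclick', function (event) {
    var img = event.target.closest ? event.target.closest('img') : null;
    if (!img) { return; }
    var rect = img.getBoundingClientRect();
    window.webkit.messageHandlers.imageDoubleTap.postMessage({
        src: img.src,
        rect: { x: rect.x, y: rect.y, width: rect.width, height: rect.height }
    });
});
"""

final class ImageDoubleTapHandler: NSObject, WKScriptMessageHandler {
    /// Called by WebKit whenever the injected script posts a message.
    func userContentController(_ userContentController: WKUserContentController,
                               didReceive message: WKScriptMessage) {
        guard message.name == "imageDoubleTap",
              let body = message.body as? [String: Any],
              let src = body["src"] as? String
        else { return }
        // Hand off to the app: resolve `src` against the publication's
        // resources and present a fullscreen viewer, for example.
        print("Double-tapped image:", src)
    }
}

/// Hypothetical helper showing the WebKit wiring.
func installImageDoubleTap(on webView: WKWebView, handler: ImageDoubleTapHandler) {
    let controller = webView.configuration.userContentController
    controller.add(handler, name: "imageDoubleTap")
    controller.addUserScript(WKUserScript(source: imageDoubleTapScript,
                                          injectionTime: .atDocumentEnd,
                                          forMainFrameOnly: false))
}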