
Tokenizers tokenizer #1261

Merged: 6 commits merged into pytorch:main on Nov 5, 2024

Conversation

@gabe-l-hart (Contributor) commented Oct 3, 2024

Dependencies

This PR is part of a sequence in support of adding Granite Code. It depends on merging the following PRs:

Issues

Closes #1251

Description

This PR adds partial support for models that use the tokenizers library (as opposed to tiktoken or sentencepiece) for tokenization. It only addresses support in the python runner, and it does so by creating a new class in the tokenizer module that simply wraps tokenizers.
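
For illustration, here is a minimal sketch of the kind of thin wrapper involved. This is a hedged sketch only: the class and method names are placeholders and may not match the actual TokenizerBase interface defined in this PR.

```python
# Hedged sketch: a thin wrapper around the Hugging Face `tokenizers` library.
# The class and method names are assumptions for illustration and may not
# match the actual TokenizerBase interface used in torchchat.
from tokenizers import Tokenizer


class WrappedHFTokenizer:
    def __init__(self, tokenizer_json_path: str):
        # Load a serialized tokenizer.json produced by `tokenizers`
        self._tok = Tokenizer.from_file(tokenizer_json_path)

    def encode(self, text: str) -> list[int]:
        # `tokenizers` returns an Encoding object; we only need the ids
        return self._tok.encode(text).ids

    def decode(self, ids: list[int]) -> str:
        return self._tok.decode(ids)
```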

Discussion

I'm not sure this is the correct direction to go for solving this since the tokenizers library is not (to the best of my knowledge) portable to the various export formats (yet). There are two main challenges to extending more tokenizer support outside of simply wrapping tokenizers:

Pre-tokenizers

For many tokenizers, multiple regexes are applied in sequence to split the raw string. Not being a regex expert myself, it's not immediately clear to me whether this kind of multi-pass splitting can be merged into a single regex. For other tokenizers, a single regex is used, but it is a different expression from any of those currently implemented in tiktoken.
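
To make the multi-pass idea concrete, here is a hedged sketch; the two patterns are invented for illustration and are not taken from any real pre-tokenizer configuration.

```python
import re

# Hedged illustration of multi-pass pre-tokenization: each regex re-splits the
# pieces produced by the previous pass. The patterns are made up and do not
# correspond to any real model's pre-tokenizer configuration.
PATTERNS = [
    r"\s+|\S+",                    # pass 1: separate whitespace runs from non-whitespace runs
    r"\d+|[A-Za-z]+|[^A-Za-z\d]+"  # pass 2: split digits, letters, and punctuation apart
]


def multi_pass_split(text: str) -> list[str]:
    pieces = [text]
    for pattern in PATTERNS:
        pieces = [m for piece in pieces for m in re.findall(pattern, piece)]
    return pieces


# e.g. multi_pass_split("hello   world42!") -> ['hello', '   ', 'world', '42', '!']
```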

From my investigation, I think there are a few candidate paths forward:

  1. Provide a c++ implementation of the various tokenization routines from tokenizers in a separate implementation of the Tokenizer class.
  2. Extend the existing c++ TikToken class to support multiple regexes in the pre-tokenizer
    • This would also require making the set of patterns configurable, and therefore either serializing them into the tokenizer.model artifact or passing them as arguments at instantiation time.

NOTE: The corresponding tokenization in llama.cpp lives here. This code is a full implementation of a unified tokenizer with configuration to dispatch between known patterns and optimized implementations. The config for the model that indicates which tokenizer to use is stored in the model's GGUF file directly, so at load time, the correct tokenizer is found based on that value.
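
As a hedged sketch of what config-driven dispatch could look like (loosely analogous to how llama.cpp picks a tokenizer from a value stored with the model), the registry keys and stand-in classes below are invented for illustration only:

```python
# Hedged sketch of config-driven tokenizer dispatch. The registry keys and the
# stand-in classes are invented for illustration; they are not llama.cpp's or
# torchchat's actual names.
class TiktokenStyleTokenizer:
    def __init__(self, path: str): ...


class SentencePieceStyleTokenizer:
    def __init__(self, path: str): ...


TOKENIZER_REGISTRY = {
    "tiktoken": TiktokenStyleTokenizer,
    "sentencepiece": SentencePieceStyleTokenizer,
}


def build_tokenizer(config: dict, path: str):
    # `config["tokenizer_type"]` stands in for the value llama.cpp reads from
    # the GGUF metadata; unknown types fail loudly rather than guessing.
    kind = config["tokenizer_type"]
    if kind not in TOKENIZER_REGISTRY:
        raise ValueError(f"Unsupported tokenizer type: {kind}")
    return TOKENIZER_REGISTRY[kind](path)
```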

Special Tokens

Even for models that use a single regex (including the llama regex), different special tokens may be used for special functionality (chat template, FIM, tool calling, other custom prompting). Since tokenizer.model stores only the vocab, there is currently no way to record the special tokens during serialization (similar to the need for configuration of pre-tokenizers).
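
As a hedged illustration of the gap: to the best of my understanding of the tokenizers file format, the special tokens live in tokenizer.json's "added_tokens" section, while a ranks-only tokenizer.model has no corresponding place to carry them.

```python
import json

# Hedged sketch: read the special tokens from a Hugging Face tokenizer.json.
# The "added_tokens" layout reflects my understanding of the tokenizers file
# format; a ranks-only tokenizer.model has nowhere to carry this information.
def load_special_tokens(tokenizer_json_path: str) -> dict[str, int]:
    with open(tokenizer_json_path, "r", encoding="utf-8") as f:
        data = json.load(f)
    return {
        entry["content"]: entry["id"]
        for entry in data.get("added_tokens", [])
        if entry.get("special", False)
    }
```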

pytorch-bot (bot) commented Oct 3, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchchat/1261

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 4a20f69 with merge base f20f5e7:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label Oct 3, 2024
@gabe-l-hart force-pushed the TokenizersTokenizer-1251 branch 7 times, most recently from f2cba4c to 3554c3e on October 9, 2024 23:52
@gabe-l-hart marked this pull request as ready for review October 10, 2024 16:07
@gabe-l-hart (Contributor Author)

@Jack-Khuu This PR is now the tip of the chain. I've opened it up to review, but I suspect this one will need a lot more discussion than the others. As an FYI, I'm working on a c++ implementation that would support tokenizers tokenizers (branch), but it's slow going with other competing priorities.

@gabe-l-hart (Contributor Author) commented Oct 10, 2024

Moving the conversation on the various open questions here.

I think I've just discovered part of why converting from tokenizers to tiktoken format (e.g. with my script) is not straightforward.

One of the main differences between the tokenizer.model and tokenizer.json formats, besides the presence of a bunch of metadata, is that the vocab and merges are held separately in tokenizer.json, whereas in tokenizer.model the merge ranks are explicitly expected to match the token IDs. This comment seems to indicate that this is one way that the vocab can be constructed, but that it is not a required part of the BPE algorithm. This would indicate that tiktoken -> tokenizers should work fine, but tokenizers -> tiktoken will be much harder, because there's no guarantee that this assumption about ranks will hold for an arbitrary vocab/merges pair in a tokenizers model.

UPDATE: Further digging shows this might still be ok for standard cases. For Granite Code at least, the ordering of the tokens in the merges strictly matches the "correct" rank and always has a value offset of 261. After a bunch of digging, I think I've convinced myself that the numeric value of the rank is not critical, since it is used only to order a priority queue when performing merges. As such, having the ordering match should produce the same results.
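
A hedged sketch of that observation follows; the "left right" merge-entry format is an assumption about the tokenizer.json layout, and the base single-byte tokens and GPT-2 byte-level character remapping are deliberately omitted.

```python
# Hedged sketch: convert a tokenizers-style merges list to tiktoken-style
# ranks by trusting only the *order* of the merges, not the numeric token IDs.
# Deliberately omitted for brevity: the base single-byte tokens and the GPT-2
# byte-level character remapping that a real converter would need to handle.
def merges_to_ranks(merges: list[str]) -> dict[bytes, int]:
    ranks: dict[bytes, int] = {}
    for rank, merge in enumerate(merges):
        left, right = merge.split(" ", 1)
        # The absolute rank value does not matter for BPE correctness; it only
        # determines priority ordering when merges are applied.
        ranks[(left + right).encode("utf-8")] = rank
    return ranks
```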

@Jack-Khuu (Contributor)

Pardon the delay: I've been OOO (still am).
Will take a look when I get back to the office.

Thanks again!!

@gabe-l-hart (Contributor Author)

Not a problem at all, I've been distracted on other threads too. I have some partial work towards a native c++ implementation that supports multiple pre-tokenizer regexes and custom special tokens. At the same time, one of those distracting threads has had me looking more closely at sentencepiece and it's possible we could go the route of converting from tokenizers -> sentencepiece and avoid the need for a full c++ implementation. I'll update as I get more clarity.

@Jack-Khuu (Contributor)

Thanks again @gabe-l-hart, feel free to loop me into the other threads (HF?) if you think it'll help

@Jack-Khuu (Contributor) left a review comment:

Out of scope of this PR (i.e. we'll fix afterwards), but we should probably move toward using an enum for the tokenizer to save us some headache

@@ -23,6 +23,8 @@
import tiktoken
from tiktoken.load import load_tiktoken_bpe

from .base import TokenizerBase
Contributor:

Any reason not to use the full path?

Suggested change:
- from .base import TokenizerBase
+ from tokenizer.base import TokenizerBase

Contributor Author:

Heh, no, I have tended towards relative imports for local files (the mental equivalent of #include "foo.h" vs #include <string> for local vs standard/third-party files). Definitely no strong preference though! I'd much rather stay consistent with the rest of the project.

@@ -193,6 +193,7 @@ class TokenizerArgs:
tokenizer_path: Optional[Union[Path, str]] = None
is_sentencepiece: bool = False
is_tiktoken: bool = False
is_tokenizers: bool = False
Contributor:

Since tokenizers as a general term is overloaded

Suggested change:
- is_tokenizers: bool = False
+ is_hf_tokenizers: bool = False

from .base import TokenizerBase


class TokenizersTokenizer(TokenizerBase):
Contributor:

Suggested change:
- class TokenizersTokenizer(TokenizerBase):
+ class HFTokenizer(TokenizerBase):

Contributor:

Ditto with the file name

Contributor Author:

Nice, I like that. I was struggling with the generic name and things like TokenizersTokenizer just sound bad. I'll rename the file in a separate commit since I can't stage that as a suggestion.

@@ -229,16 +244,27 @@ def validate_model(
if model is None:
return

if self.is_tiktoken == self.is_sentencepiece:
if len(list(filter(lambda x: x, [self.is_tiktoken, self.is_tokenizers, self.is_sentencepiece]))) != 1:
Contributor:

Suggested change:
- if len(list(filter(lambda x: x, [self.is_tiktoken, self.is_tokenizers, self.is_sentencepiece]))) != 1:
+ if sum([self.is_tiktoken, self.is_tokenizers, self.is_sentencepiece]) != 1:

Contributor Author:

Nice, that's way simpler!

use_tiktoken: bool = False
use_tokenizers: bool = False
Contributor:

Suggested change:
- use_tokenizers: bool = False
+ use_hf_tokenizers: bool = False

@@ -329,12 +331,14 @@ class ModelArgs:
model_type: ModelType
transformer_args: Dict[str, Dict[str, Any]]
use_tiktoken: bool
use_tokenizers: bool
Contributor:

Suggested change:
- use_tokenizers: bool
+ use_hf_tokenizers: bool

@Jack-Khuu (Contributor)

Mini update: We have some engineers internally who may be interested in helping on the C++ front if you get stuck, btw.

@gabe-l-hart (Contributor Author)

That's great! I'm finally getting back to this. Will push updates for your suggestions and will push a branch with the very WIP c++ stuff.

@gabe-l-hart (Contributor Author) left a review comment:

Just pushed changes with all the renames. Thanks for the suggestion!

@gabe-l-hart (Contributor Author)

Oops, missed one place to change the name. Should be fixed now.

@gabe-l-hart (Contributor Author)

very WIP on the c++ implementation is on a branch now: https://github.com/gabe-l-hart/torchchat/tree/TokenizersCpp-1251

…support

Branch: GraniteCodeSupport

Signed-off-by: Gabe Goodhart <[email protected]>
…tokenizers

This allows for all HF tokenizers to be supported in the python layer. It
will need significant work to offer similar compatibility at the c++ layer.

Signed-off-by: Gabe Goodhart <[email protected]>
Branch: GraniteCodeSupport

Signed-off-by: Gabe Goodhart <[email protected]>
…kenizer

Branch: GraniteCodeSupport

Signed-off-by: Gabe Goodhart <[email protected]>
pytorch#1251
Branch: TokenizersTokenizer-1251

Co-Authored-By: [email protected]
Signed-off-by: Gabe Goodhart <[email protected]>
@Jack-Khuu merged commit 9480258 into pytorch:main Nov 5, 2024
52 checks passed
@mikekgfb (Contributor) commented Nov 5, 2024

very WIP on the c++ implementation is on a branch now: https://github.com/gabe-l-hart/torchchat/tree/TokenizersCpp-1251

Once all the changes needed to support granite have landed, be sure to add the models to the known model json (edit: just saw #1336, which does that) and the README.md model list, please?

Also, at that point, when the granite models work with the code that's checked in, is there a smallish granite model (ideally without a special license that needs to be accepted, to avoid having to deal with HF tokens as github secrets) that could be run as an end-to-end test?

@gabe-l-hart deleted the TokenizersTokenizer-1251 branch November 8, 2024 15:49
@gabe-l-hart (Contributor Author)

Also, at that point when the granite models work with the code that's checked in, is there a smallish granite model (ideally without special license that needs to be accepted, avoiding having to deal with HF tokens as github secrets?) that could be run as end-to-end test?

All Granite models (starting with the Granite Code ones) are under Apache-2. The smallest Granite Code is the 3b model which is admittedly not CI/CD sized. Once we start tackling the "granite" and "granitemoe" architectures (Granite 3.X), the HF team has also created tiny random test models that can be used to ensure the tensors flow, but don't produce any real output.

@gabe-l-hart (Contributor Author)

For the discussion around c++ support for HF tokenizers, I recently discovered the tokenizer implementation in mlc-llm which may be pretty close to exactly what we need. I'm not deeply familiar with the project (thus just finding it), but it could be an interesting starting point for a general-purpose c++ tokenizer solution. Another option would be to look into hoisting the llama.cpp tokenization into its own project, but that would likely require some significant untangling from their codebase and would be hard to maintain as the project evolves.

Labels: CLA Signed (this label is managed by the Meta Open Source bot)
Development

Successfully merging this pull request may close these issues.

Add support for tokenizers tokenizers
4 participants