Additional Topic Ideas / Suggestions #16
@briacht @aslotte @bartczernicki @jwood803 tagging you for visibility. It would also be great to get input from community members.
We could go over some of the transforms available for pre-processing data. Maybe the custom mapping or expression transforms?
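For readers following along, here is a minimal sketch of what a custom mapping transform looks like in ML.NET. The column names and the normalization logic are illustrative assumptions for this example, not something from the workshop itself:

```csharp
using Microsoft.ML;

// Hypothetical example: a CustomMapping transform that normalizes a text column.
// InputRow/OutputRow and the cleaning logic are made up for illustration.
var ctx = new MLContext();

var pipeline = ctx.Transforms.CustomMapping<InputRow, OutputRow>(
    (input, output) =>
        output.CleanText = input.Text?.Trim().ToLowerInvariant() ?? "",
    contractName: null);

var data = ctx.Data.LoadFromEnumerable(new[]
{
    new InputRow { Text = "  Hello ML.NET  " }
});

var transformed = pipeline.Fit(data).Transform(data);

public class InputRow
{
    public string Text { get; set; }
}

public class OutputRow
{
    public string CleanText { get; set; }
}
```

The delegate runs per row, so it is a convenient escape hatch when no built-in transform fits; note that pipelines using `CustomMapping` need the mapping code available at scoring time as well.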
The first issue I ran into was with the prerequisites. I thought I had them installed successfully, but when we reached the point where they were needed, I found I had missed a step, significantly enough that I had to try a few things before getting it working. Luckily I could pause the workshop while I sorted that out. A fix would be to include a test in the prerequisites that proves everything is installed properly.
The other issue I ran into was the pace. After I got the prerequisites in place, the content felt really slow and the activity times were way too long; I ended up pausing the stream for an hour, doing something else, and then coming back. My suggested fix is to offer two tracks / different workshops: one for people brand new to machine learning and ML.NET, and another for people who understand machine learning basics and want to dig into ML.NET, which could go much deeper than the current workshop.
I think having some advanced scenarios/architectures that someone can jump to, or some demos, would be nice. That way you have something forward-looking: this is all the coolness you can do if you get started with ML.NET and this workshop. You could pivot that into "best practice tips" or "production considerations". It might not be the best fit here, but with .NET you have many more scenarios to go through, since very few full-stack R or Python systems are built (they exist, but are few and far between).
@luisquintanilla I have not found your contact info. Could you help me with a problem consuming a GPT-2 ONNX model using an ML.NET transform?
Hi @murilocurti, I suspect this is your post on StackOverflow. Here is a sample that might help you get started. Decoding the outputs isn't finished yet; the part still missing is the post-processing. Hope that helps.
Hi @luisquintanilla !!!! |
@luisquintanilla !!! It almost worked :) I've followed your sample, but I'm getting an error when the prediction is evaluated: predictions.First(). See it below:
The Program.cs file is at, and the exception is at line 52. In this article, Nikola commented on the exception: Length of memory (12) must match product of dimensions (20).
What do you think? Thank you!!! |
When padding the data there was an error in the original sample. This should do it. Note that I'm using the LM-HEAD model instead of the standard one, because that one gives you the scores for each of the words in the vocabulary.

```csharp
using Microsoft.ML;
using Microsoft.ML.Tokenizers;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.Data;

var ctx = new MLContext();

// Tokenizer assets and the GPT-2 LM-head ONNX model
var vocabFilePath = "vocab.json";
var mergeFilePath = "merges.txt";
var onnxModelFilePath = "gpt2-lm-head-10.onnx";

var tokenizer = new Tokenizer(new Bpe(vocabFilePath, mergeFilePath), RobertaPreTokenizer.Instance);

// Fixed input/output shapes expected by the model
var shape = new Dictionary<string, int[]>()
{
    { "input1", new int[] { 1, 1, GPT2Settings.SeqLength } },
    { "output1", new int[] { 1, 1, GPT2Settings.SeqLength, 50257 } }
};

var onnxPipeline =
    ctx.Transforms.ApplyOnnxModel(
        modelFile: onnxModelFilePath,
        inputColumnNames: new[] { "input1" },
        outputColumnNames: new[] { "output1" },
        shapeDictionary: shape,
        gpuDeviceId: null,
        fallbackToCpu: true);

var inputs = new[]
{
    "The brown fox jumped over ",
};

// Tokenize the inputs
var data =
    inputs.Select(x => new ModelInput
    {
        OriginalInput = x,
        Ids = tokenizer.Encode(x).Ids.Select(n => (long)n).ToArray()
    });

// Truncate or pad every sequence to exactly SeqLength tokens
var paddedData = data.Select(x =>
{
    var len = x.Ids.Count();
    var updatedInput = new ModelInput { OriginalInput = x.OriginalInput };

    if (len >= GPT2Settings.SeqLength)
    {
        updatedInput.Ids = x.Ids.Take(GPT2Settings.SeqLength).ToArray();
    }
    else
    {
        var paddedArray = Enumerable.Repeat(-50256L, GPT2Settings.SeqLength - len);
        updatedInput.Ids = x.Ids.Concat(paddedArray).ToArray();
    }

    return updatedInput;
});

var idv = ctx.Data.LoadFromEnumerable(paddedData);
var output = onnxPipeline.Fit(idv).Transform(idv);
var predictions = ctx.Data.CreateEnumerable<ModelOutput>(output, reuseRowObject: false);

// Turn raw logits into probabilities
IEnumerable<float> ApplySoftmax(IEnumerable<float> input)
{
    var sum = input.Sum(x => (float)Math.Exp(x));
    var softmax = input.Select(x => (float)Math.Exp(x) / sum);
    return softmax.ToArray();
}

// For each position, pick the highest-probability token from the 50257-word vocabulary
var nextWords =
    predictions.ToArray()[0].Output1
        .Chunk(50257)
        .Select(x =>
            ApplySoftmax(x)
                .Select((c, i) => new { Index = i, Confidence = c })
                .OrderByDescending(l => l.Confidence)
                .Select(l => new { Label = tokenizer.Decode(l.Index, true), l.Confidence })
                .First())
        .ToArray();

var originalString = inputs.First();
var nextWordIdx = inputs.First().Split(' ').Count();
Console.WriteLine($"{originalString} {nextWords[nextWordIdx].Label}");

struct GPT2Settings
{
    public const int SeqLength = 12;
}

public class ModelInput
{
    public string OriginalInput { get; set; }

    // Length must equal the product of the declared dimensions (1 * 1 * SeqLength)
    [ColumnName("input1")]
    [VectorType(1, 1, GPT2Settings.SeqLength)]
    public long[] Ids { get; set; }
}

public class ModelOutput
{
    [ColumnName("output1")]
    [VectorType(1 * 1 * GPT2Settings.SeqLength * 50257)]
    public float[] Output1 { get; set; }
}
```
@luisquintanilla It worked! I've been swamped with work since the weekend; I'll get back to you as soon as I have some time.
Hi @luisquintanilla, I finally have time again. As I said, it worked: I was able to load the model and get a response, but I can't understand why the response is apparently truncated or not correctly decoded. Do you have any idea? See the outputs below. Thanks!

input: .NET Conf is
input: The brown dog jumped over
input: My name is Luis and I like
input: In the darkest depths of mordor
output: C
Looks like the problem was related to the tokenizer and merges files. I've re-downloaded the files and the content is OK.
And the result was:
@murilocurti thanks for looking into it! You're right. I totally missed that. Can you add a comment with the link to the files you used? Thank you.
@luisquintanilla for sure! The direct download links are the following: https://huggingface.co/gpt2/raw/main/merges.txt Thank you! |
Happy to help. We're learning together here 🙂 |
Creating this issue to track and get input on ways to improve or expand the contents of this workshop.
Which topics should we dive deeper into?
What are additional topics you'd like to see?
Improvements
Topics