
How can I include my machine-learning model in the python file? #66

Open
SeverinPaar opened this issue May 24, 2021 · 11 comments


@SeverinPaar

SeverinPaar commented May 24, 2021

I want to incorporate some machine learning (no libraries, of course), but since everything has to be in one file, I'm having trouble saving my model. I tried pickle, but that doesn't work when the file is called from the tournament code. (Also, I don't think pickle is allowed.)

Has anyone figured this out already? I imagine there are multiple machine-learning approaches.

@SeverinPaar SeverinPaar changed the title Is pickle allowed? How can I include my machine-learning model How can I include my machine-learning model in the python file? May 24, 2021
@jun-bun

jun-bun commented May 24, 2021

Assuming you wrote it without external libraries, you would just include the model and weights in the code.

@redtachyon2098

Just put all of the data needed to reconstruct an exact copy of the model inside "memory", and load it up when the strategy function starts.
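The suggestion above — bake the data needed to reconstruct the model into the source itself — can be sketched in plain Python. The helper name and the example weights below are made up; the key point is that `repr()` of a Python float round-trips losslessly, so pasting the generated line into the strategy file reconstructs the exact values:

```python
# Hypothetical sketch: export trained weights as a Python literal that can
# be pasted into the single strategy file. Names and values are illustrative.
import ast

def weights_to_source(weights, name="WEIGHTS"):
    """Render nested lists of floats as one Python assignment statement."""
    return f"{name} = {repr(weights)}"

# A tiny made-up model: one weight matrix and one bias vector.
trained = [[[0.5, -1.25], [0.03, 2.0]], [0.1, -0.1]]
line = weights_to_source(trained)

# Pasting `line` into the strategy file rebuilds the model exactly,
# because repr() of a float in Python 3 round-trips without loss:
recovered = ast.literal_eval(line.split(" = ", 1)[1])
assert recovered == trained
```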

@carykh
Owner

carykh commented May 25, 2021

I hadn't realized people would use ML, because my initial goal was to have people write strategies they could reasonably explain with their own human logic, so each script would mostly be a series of if-then statements. I suppose you could just save the weights in the code itself, like a giant array initialized at the beginning! If it's fewer than 10,000 weights, it should be manageable... I hope!
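A minimal sketch of the "giant array initialized at the beginning" idea, with no external libraries. The architecture (2 inputs, 3 hidden units, 1 output) and every weight value here are invented for illustration:

```python
# The trained weights live in the file as plain list literals, and the
# forward pass is hand-written. All values below are made up.
import math

W1 = [[0.8, -0.5], [0.2, 1.1], [-0.7, 0.4]]  # hidden-layer weights
B1 = [0.1, -0.2, 0.05]                        # hidden-layer biases
W2 = [1.5, -0.9, 0.3]                         # output-layer weights
B2 = 0.0                                      # output bias

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(inputs):
    """Forward pass; e.g. inputs = [my last move, their last move]."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
              for row, b in zip(W1, B1)]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)) + B2)
```

Even at 10,000 weights, the literals would add roughly a couple hundred kilobytes of source, so a single file stays workable.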

@redtachyon2098

I made a simple Deep Q learning strategy, but it wasn't very good. So I'm now using a neural network to try and predict the opponent's next turn.
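Predicting the opponent's next turn doesn't strictly require a neural network; a much lighter technique with the same goal is a first-order Markov predictor that counts how often each of the opponent's moves followed their previous move. The function below is an illustrative sketch, not anyone's actual submission:

```python
def predict_next(history):
    """history: list of opponent moves, 0 = defect, 1 = cooperate.
    Returns the move that most often followed their last move."""
    if len(history) < 2:
        return 1  # assume cooperation until we have data
    counts = {0: [0, 0], 1: [0, 0]}  # prev move -> [defect count, cooperate count]
    for prev, nxt in zip(history, history[1:]):
        counts[prev][nxt] += 1
    defects, coops = counts[history[-1]]
    return 1 if coops >= defects else 0
```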

@redtachyon2098

> I hadn't realized people would use ML, because my initial goal was to have people write strategies they could reasonably explain with their own human logic, so each script would mostly be a series of if-then statements. I suppose you could just save the weights in the code itself, like a giant array initialized at the beginning! If it's fewer than 10,000 weights, it should be manageable... I hope!

Due to the content of your channel, I couldn't resist.

@ghost

ghost commented May 25, 2021

I think this will ruin all the fun, it shouldn't be allowed

@ColinJPage

> I think this will ruin all the fun, it shouldn't be allowed

I agree that ML is against the spirit of the competition. But I think it will be cool to see what people come up with regardless. Training an effective ML strategy still requires accurate prediction of the population. I look forward to reading the top strategy designed by a human.

@HassanWahba

You can't predict this anyway, because there is no function that describes all algorithms at once.
If it gives good results, then it's most probably overfitting and won't generalize to other strategies, in my humble opinion.

@redtachyon2098

redtachyon2098 commented May 27, 2021

ML is, at its most basic level, a function approximation algorithm: a function with a lot of adjustable variables (in neural networks, these are the "weights" and "biases") that alter its behavior. Since you can't save values between games without cheating, the model will have to train on the fly unless your algorithm is unnecessarily complex. This is what I did (which I didn't submit, by the way), and it only did a bit better than a detective. Later on, ML could become the meta, and at that point victory would indeed go to whoever initially trained their model longer, so that it reacts faster and more efficiently to opposing strategies. Personally, I don't see that coming for a while. If a decent ML strategy were found any time soon, I believe it would be used to assist a deterministic main algorithm rather than being pitted against strategies directly.
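The "train on the fly" idea above can be sketched as a single logistic unit whose weights start at zero and receive one gradient step per observed turn of the match. The class name, features, and learning rate are all illustrative assumptions:

```python
# Hypothetical sketch of online (within-match) training: a one-neuron
# logistic model updated by SGD on log loss after every observed move.
import math

class OnlinePredictor:
    def __init__(self, n_features, lr=0.5):
        self.w = [0.0] * n_features  # weights start untrained
        self.b = 0.0
        self.lr = lr

    def prob(self, x):
        """Predicted probability that the opponent cooperates next."""
        z = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, y):
        """One gradient step after observing the true move y (0 or 1)."""
        err = self.prob(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

model = OnlinePredictor(n_features=2)
# Suppose the opponent always copies our last move (tit-for-tat-like data);
# features are [my last move, constant 1.0]:
for my_last, their_next in [(1, 1), (0, 0), (1, 1), (0, 0)] * 20:
    model.update([float(my_last), 1.0], their_next)
```

After those 80 updates the model has learned the copying pattern: it predicts cooperation when our last move was 1 and defection when it was 0.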

@TheGoudaMan

TheGoudaMan commented May 28, 2021

I don't think ML is going to cut it, because strategy performance depends entirely on the whole environment, and you have to strategize around an environment that is unknown.
On top of that, your strategy has no memory of previous opponents, so you can't adapt your strat to figure out the environment, which makes ML pretty much useless IMO.

I suppose you can train it against a variety of different strats; the idea might be "Okay, I'll train it against random, then against TFT, then ..." until you've trained it against every kind of strat. But there will always be a nonzero chance that your strat tries to exploit something, since some strats are exploitable. In that case, an ML strat will probably fail in an environment with only TFT strats.

@redtachyon2098

That's why I said ML isn't viable at the moment. But if someone actually trains a good and complex-enough ML algorithm (maybe some form of reinforcement learning? Not sure) for a very long time against a myriad of strategies, just as you said, it has some chance of reigning supreme. For now, though, the best it can do is adapt to an opposing strategy quickly and find a defense, and even that won't be particularly good. As I said above, ML's only real use case right now is maybe classifying different strategies to reduce false positives or false negatives in a detective strategy. We'll just have to see; we never know what will happen in the future. Maybe ML will become an unbeatable meta, or maybe it will fade into even deeper obscurity.
