Ramsey Theory RL is a project that uses simple RL and a basic graph invariant to pseudo-intelligently search for counterexamples to unknown Ramsey number bounds. The project's main aims are to improve those bounds, or to find additional non-isomorphic counterexample graphs for known numbers to aid future efforts.
Our algorithm is an informed best-first search. At each iteration, every neighbor of the current graph (each graph one edge flip away) is represented numerically by its invariant: the counts of all 11 isomorphism classes of 4-vertex subgraphs it contains. Neighbors whose invariants have not yet been expanded are then passed through a heuristic that determines which graph to expand next. Past literature has used the count of 4-paths as a heuristic; we instead use a DNN with one hidden layer, iteratively trained on the graphs seen so far, to encourage the search to stray away from known areas.
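To make the invariant concrete, here is a minimal sketch (not the package's internal code; the function name is hypothetical) that computes the 11 induced-4-subgraph counts with networkx:

```python
from itertools import combinations
import networkx as nx

# Atlas indices 8-18 are exactly the 11 graphs on 4 vertices.
FOUR_VERTEX_GRAPHS = [nx.graph_atlas(i) for i in range(8, 19)]

def four_subgraph_invariant(G):
    """Count the induced 4-vertex subgraphs of G by isomorphism class."""
    counts = [0] * 11
    for quad in combinations(G.nodes, 4):
        H = G.subgraph(quad)  # induced subgraph on these 4 vertices
        for k, ref in enumerate(FOUR_VERTEX_GRAPHS):
            if nx.is_isomorphic(H, ref):
                counts[k] += 1
                break
    return counts

# Example: the invariant of a random graph on 8 vertices.
print(four_subgraph_invariant(nx.gnp_random_graph(8, 0.5, seed=0)))
```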
The algorithm can pretrain on known graphs, and can start from a random graph, an empty graph, known counterexamples for the current n, or known counterexamples from a smaller n (optionally with new edges added).
Our algorithm has a runtime of O(
- Sign up with Neptune AI
- Create a project called RamseyRL with a model called RAM-HEUR
- Get your Neptune API token and name
- Run the following:
```bash
pip install RamseyTheoryRL
pip install -r https://raw.githubusercontent.com/aLehav/RamseyTheoryRL/main/RamseyTheoryRL/requirements.txt --quiet
```
If there are dependency conflicts, run the following instead:
```bash
pip install RamseyTheoryRL
pip install numpy==1.23 keras==2.12.0 matplotlib==3.7.1 neptune==1.3.1 \
    networkx==3.1 pandas==1.5.3 python_igraph==0.10.5 scikit_learn==1.2.2 \
    seaborn==0.12.2 tensorflow==2.12.0 tqdm==4.65.0 neptune_tensorflow_keras==2.1.1 \
    --quiet
```
```python
import os

# Change these for your real token and username
os.environ['NEPTUNE_API_TOKEN'] = 's0me!l0nGn3pTunEt0k3N='
os.environ['NEPTUNE_NAME'] = 'yourname'
```
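If you'd rather not hardcode the token in a notebook, an optional sketch using Python's standard getpass (this is a convenience, not part of the package) works in Colab as well:

```python
from getpass import getpass
import os

# Prompt for credentials at runtime instead of storing them in the notebook.
os.environ['NEPTUNE_API_TOKEN'] = getpass('Neptune API token: ')
os.environ['NEPTUNE_NAME'] = input('Neptune username: ')
```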
```python
from RamseyTheoryRL.src.ramsey_checker.test import NeptuneRunner
import tensorflow as tf

def project_fetcher():
    return f"{os.environ.get('NEPTUNE_NAME')}/RamseyRL"

runner = NeptuneRunner(n=8, s=3, t=4, project=project_fetcher())

new_params = runner.PARAMS
changed_vals = {
    'iter_batch': 20,
    'iter_batches': 50,
    'starting_graph': "FROM_CURRENT",
    'starting_graph_path': '/data/found_counters/r3_4_8_isograph.g6',
    'starting_edges': True,
    'load_model': False,
    'profiler': True,
    'alpha': 0,
    'model': None,
    'training_epochs': 5,
    'epochs': 1,
    'batch_size': 32,
    'optimizer': 'adam',
}
new_params.update(changed_vals)
runner.update_params(new_params)
```
```python
runner.run()
```
The method shown is meant for a Google Colab environment.
```python
from google.colab import files
files.upload()
```
The file should be taken from McKay and Exoo's database.
```python
import shutil

# Make sure the name matches that in 'starting_graph_path'
uploaded_filename = "r3_4_8_isograph.g6"
destination_path = (
    "/usr/local/lib/python3.10/dist-packages/"
    "RamseyTheoryRL/data/found_counters/r3_4_8_isograph.g6"
)
shutil.move(uploaded_filename, destination_path)
```
```python
# Getting the max index of the starting graph.
# Only to be used when PARAMS['starting_graph'] in ["FROM_PRIOR", "FROM_CURRENT"].
import sys
import networkx as nx

def get_max_starting_index(runner):
    counters = nx.read_graph6(sys.path[-1] + runner.PARAMS['starting_graph_path'])
    counters = [counters] if not isinstance(counters, list) else counters
    return len(counters)

# Run the search once from each starting graph in the file.
for i in range(get_max_starting_index(runner)):
    runner.PARAMS['starting_graph_index'] = i
    runner.run()
```
- heuristic_type: Choose from RANDOM, 4PATH, DNN, SCALED_DNN
- iter_batch: Steps to take before updating model data / weights
- iter_batches: None for no stopping value, else the number of batches of iter_batch steps to run
- starting_graph: Choose from RANDOM, FROM_PRIOR, FROM_CURRENT, EMPTY
- starting_graph_path: Path to starting graph
- starting_graph_index: Index of the starting graph within the starting graph list
- starting_edges: Whether to add new edges when starting FROM_PRIOR
- load_model: Whether or not to load a past version of the model
- profiler: Enable profiler
- model: When using the DNN or SCALED_DNN heuristic, the default model (a DNN with hidden layers of sizes 32 and 16 and an output layer of size 1, compiled with optimizer=params['optimizer'], loss=params['loss'], metrics=['accuracy']) can be overridden by passing any tf model on which model.compile has already been called (see the sketch after this list)
- training_epochs: If pretraining on data, the number of epochs to train for
- pretrain: Whether or not pretraining occurs
- pretrain_data: A list of preprocessed CSVs to train on
- epochs: Epochs to update the model for after each iteration of training
- batch_size: Batch size for training
- loss_info: Info regarding loss used, improves logging readability
- alpha: Learning rate for the DNN (i.e. if a counter was found k steps ago, a point that isn't a counter in this iteration would have a value of $\alpha^k$)
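As an illustration of the model override described above, here is a minimal, hypothetical sketch; the input width of 11 (the length of the 4-subgraph invariant vector) and the loss choice are assumptions, not package requirements:

```python
import tensorflow as tf

# Hypothetical custom heuristic model. Assumes the input is the
# 11-dimensional 4-subgraph invariant vector; adjust to your setup.
custom_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(11,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

# The 'model' param expects a model that has already been compiled.
custom_model.compile(optimizer='adam',
                     loss='binary_crossentropy',
                     metrics=['accuracy'])

new_params.update({'model': custom_model})
runner.update_params(new_params)
```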
- Improving documentation and usability
- Removing and rearranging older content
- Integrating the pip package into the Getting Started section