Traveling Salesman Problem

79 papers with code • 1 benchmark • 1 dataset

The Traveling Salesman Problem (TSP) asks for the shortest closed tour that visits each city in a given set exactly once and returns to the starting city. It is NP-hard and serves as a standard testbed for both classical heuristics and learned combinatorial optimization methods.

Most implemented papers

Neural Combinatorial Optimization with Reinforcement Learning

pemami4911/neural-combinatorial-rl-pytorch 29 Nov 2016

Despite the computational expense, without much engineering and heuristic design, Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to 100 nodes.
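
As a rough illustration of the training signal behind such approaches, the sketch below computes the length of a closed 2D Euclidean tour and a REINFORCE-style surrogate loss. The sampled permutation and log-probability are placeholders standing in for a real policy network, not the paper's model.

```python
# Sketch only: tour length + REINFORCE surrogate for neural combinatorial optimization.
# The permutation and log-probability below are placeholders for a real policy network.
import torch

def tour_length(coords: torch.Tensor, perm: torch.Tensor) -> torch.Tensor:
    """Length of the closed tour visiting the 2D points `coords` in the order `perm`."""
    ordered = coords[perm]                                # (n, 2) points in visiting order
    nxt = torch.roll(ordered, shifts=-1, dims=0)          # successor of each city (wraps around)
    return (ordered - nxt).norm(dim=-1).sum()

def reinforce_loss(coords, perm, log_prob, baseline):
    """Policy-gradient surrogate: (tour length - baseline) * log pi(tour)."""
    with torch.no_grad():
        advantage = tour_length(coords, perm) - baseline
    return advantage * log_prob

# Toy usage: a random tour stands in for a sample from the policy.
coords = torch.rand(100, 2)                               # 2D Euclidean instance, 100 nodes
perm = torch.randperm(100)
log_prob = torch.zeros((), requires_grad=True)            # placeholder for the policy's log-prob
loss = reinforce_loss(coords, perm, log_prob, baseline=tour_length(coords, torch.arange(100)))
loss.backward()
```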

Differentiation of Blackbox Combinatorial Solvers

martius-lab/blackbox-backprop ICLR 2020

Achieving fusion of deep learning with combinatorial algorithms promises transformative changes to artificial intelligence.
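
The core trick is to call the unchanged combinatorial solver a second time during the backward pass, on costs perturbed along the incoming gradient, and use the difference of the two solutions as a surrogate gradient. The sketch below is a simplified PyTorch rendering of that idea; `nearest_neighbor_tour` is a toy stand-in for a real TSP solver and is not part of the paper or repository.

```python
# Sketch only: blackbox differentiation of a combinatorial solver.
# Forward: call the solver as-is. Backward: call it again on perturbed costs and
# return a finite-difference-style surrogate gradient.
import torch

class BlackboxSolver(torch.autograd.Function):
    @staticmethod
    def forward(ctx, costs, solver, lam):
        y = solver(costs)
        ctx.save_for_backward(costs, y)
        ctx.solver, ctx.lam = solver, lam
        return y

    @staticmethod
    def backward(ctx, grad_output):
        costs, y = ctx.saved_tensors
        y_lam = ctx.solver(costs + ctx.lam * grad_output)   # second solver call on perturbed costs
        return -(y_lam - y) / ctx.lam, None, None           # surrogate gradient w.r.t. costs

def nearest_neighbor_tour(costs):
    """Toy stand-in for a real TSP solver: greedy tour as a 0/1 edge-indicator matrix."""
    n = costs.shape[0]
    visited, cur, order = {0}, 0, [0]
    for _ in range(n - 1):
        cur = min((j for j in range(n) if j not in visited), key=lambda j: costs[cur, j].item())
        visited.add(cur)
        order.append(cur)
    y = torch.zeros_like(costs)
    for a, b in zip(order, order[1:] + order[:1]):
        y[a, b] = 1.0
    return y

costs = torch.rand(10, 10, requires_grad=True)              # e.g. predicted edge costs
tour = BlackboxSolver.apply(costs, nearest_neighbor_tour, 10.0)
tour.sum().backward()                                       # gradients reach the cost predictor
```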

Evolution of Heuristics: Towards Efficient Automatic Algorithm Design Using Large Language Model

feiliu36/eoh 4 Jan 2024

EoH represents the ideas of heuristics in natural language, termed thoughts.

RL4CO: an Extensive Reinforcement Learning for Combinatorial Optimization Benchmark

pytorch/rl 29 Jun 2023

To fill this gap, we introduce RL4CO, a unified and extensive benchmark with in-depth library coverage of 23 state-of-the-art methods and more than 20 CO problems.

Triplet Interaction Improves Graph Transformers: Accurate Molecular Graph Learning with Triplet Graph Transformers

shamim-hussain/tgt 7 Feb 2024

We also obtain SOTA results on QM9, MOLPCBA, and LIT-PCBA molecular property prediction benchmarks via transfer learning.

Combinatorial Optimization by Graph Pointer Networks and Hierarchical Reinforcement Learning

qiang-ma/graph-pointer-network 12 Nov 2019

Furthermore, to approximate solutions to constrained combinatorial optimization problems such as the TSP with time windows, we train hierarchical GPNs (HGPNs) using RL, which learn a hierarchical policy to find an optimal city permutation under constraints.
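
A common building block in such pointer-network-style policies is constrained autoregressive decoding: at each step, cities that would violate a constraint (here simply "already visited") are masked out before choosing the next one. The sketch below uses random scores in place of a learned model and omits time-window feasibility checks.

```python
# Sketch only: masked autoregressive decoding of a tour. Random scores replace the
# learned attention scores of a pointer-network-style policy.
import torch

def masked_greedy_decode(scores: torch.Tensor) -> list:
    """Greedily build a permutation of cities, masking those already visited."""
    n = scores.shape[0]
    visited = torch.zeros(n, dtype=torch.bool)
    tour, cur = [0], 0
    visited[0] = True
    for _ in range(n - 1):
        logits = scores[cur].masked_fill(visited, float("-inf"))  # forbid revisiting
        cur = int(torch.argmax(logits))
        visited[cur] = True
        tour.append(cur)
    return tour

print(masked_greedy_decode(torch.rand(8, 8)))   # e.g. [0, 5, 2, ...]
```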

Exploring the Loss Landscape in Neural Architecture Search

naszilla/naszilla 6 May 2020

In this work, we show that (1) the simplest hill-climbing algorithm is a powerful baseline for NAS, and (2) when the noise in popular NAS benchmark datasets is reduced to a minimum, hill-climbing outperforms many popular state-of-the-art algorithms.
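
For readers more familiar with TSP than NAS, the same hill-climbing idea looks like the following 2-opt local search: repeatedly propose a small change and keep it only if it improves the objective. This is an illustrative sketch, not the paper's NAS implementation.

```python
# Sketch only: hill climbing in the TSP setting (random 2-opt moves, accept only improvements).
import random

def tour_length(dist, tour):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_hill_climb(dist, tour, max_no_improve=2000):
    """Keep reversing random segments of the tour while that shortens it."""
    best, best_len, stall = tour[:], tour_length(dist, tour), 0
    while stall < max_no_improve:
        i, j = sorted(random.sample(range(len(best)), 2))
        cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]   # reverse one segment (2-opt move)
        cand_len = tour_length(dist, cand)
        if cand_len < best_len:
            best, best_len, stall = cand, cand_len, 0
        else:
            stall += 1
    return best, best_len

# Toy instance: 30 random points in the unit square.
pts = [(random.random(), random.random()) for _ in range(30)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts] for ax, ay in pts]
print(two_opt_hill_climb(dist, list(range(30)))[1])
```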

Learning to Iteratively Solve Routing Problems with Dual-Aspect Collaborative Transformer

yining043/VRP-DACT NeurIPS 2021

Moreover, the positional features are embedded through a novel cyclic positional encoding (CPE) method to allow the Transformer to effectively capture the circularity and symmetry of VRP solutions (i.e., cyclic sequences).
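
One simplified way to get the flavour of a cyclic encoding is to place the n positions on a circle and embed each with sinusoids of integer frequencies, so the last position ends up as close to the first as any two neighbours. DACT's actual CPE differs in its construction; this is only an illustration of the wrap-around property.

```python
# Sketch only: sinusoidal embeddings over angular positions on a circle.
import math
import torch

def cyclic_positional_encoding(n: int, d: int) -> torch.Tensor:
    angles = 2 * math.pi * torch.arange(n, dtype=torch.float32) / n   # angular position of each step
    freqs = torch.arange(1, d // 2 + 1, dtype=torch.float32)          # integer frequencies keep the period
    phases = angles[:, None] * freqs[None, :]                         # (n, d // 2)
    return torch.cat([torch.sin(phases), torch.cos(phases)], dim=-1)  # (n, d)

pe = cyclic_positional_encoding(20, 16)
# The last position is as close to the first as two adjacent positions are to each other.
print(torch.allclose((pe[0] - pe[-1]).norm(), (pe[0] - pe[1]).norm(), atol=1e-5))
```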

Backpropagation through Combinatorial Algorithms: Identity with Projection Works

martius-lab/solver-differentiation-identity 30 May 2022

Embedding discrete solvers as differentiable layers has given modern deep learning architectures combinatorial expressivity and discrete reasoning capabilities.
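
In spirit, this line of work runs the discrete solver on the forward pass and treats it as the identity map on the backward pass, combined with a projection/normalization of the costs. The sketch below shows that straight-through pattern in PyTorch; the argmin "solver" and the unit-norm projection are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch only: straight-through / identity backward pass for a discrete solver,
# with the costs normalized beforehand.
import torch

class IdentityBackwardSolver(torch.autograd.Function):
    @staticmethod
    def forward(ctx, costs, solver):
        return solver(costs)                      # discrete output, e.g. a 0/1 indicator vector

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None                  # pretend the solver was the identity map

def project_costs(costs):
    return costs / (costs.norm() + 1e-8)          # illustrative projection: unit-norm costs

costs = torch.rand(5, requires_grad=True)
selection = IdentityBackwardSolver.apply(project_costs(costs), lambda c: (c == c.min()).float())
selection.sum().backward()                        # costs.grad now holds the straight-through gradient
```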