| Title | Authors | Synthesis | Publisher | Keywords |
| --- | --- | --- | --- | --- |
| A Reinforcement Learning Based R-Tree for Spatial Data Indexing in Dynamic Environments | Tu Gu, Sheng Wang, et al. | Presents a method that generates R-Tree cuts using reinforcement learning instead of hand-crafted heuristics. The value function is formulated against a base tree used as a reference: the agent learns to choose actions that maximize the query-cost ratio against the base tree. The supported actions are chooseSubTree and cutSubTree, and the R-Tree is constructed by inserting data points. | SIGMOD 2023 | R-Tree, RL |
| PAW: Data Partitioning Meets Workload Variance | Zhe Li, Man Lung Yiu, Tsz Nam Chan | Aims to improve the QD-Tree, which performs well on the seen workload but may perform badly on queries that deviate from it. By introducing future queries during training, PAW generates cuts that tolerate some variance in future queries. | ICDE 2022 | Partitioning, QD-Tree |
| Efficient Online Reinforcement Learning with Offline Data | Philip J. Ball, Laura Smith, Ilya Kostrikov, Sergey Levine | Presents a method to use offline data directly in online reinforcement learning, showing that offline data can be exploited without a separate offline pre-training phase. | arXiv 2023 | Reinforcement Learning, Offline Data |
| Vertical Partitioning for Database Design: A Graphical Algorithm | Shamkant B. Navathe, Minyoung Ra | Presents a nonparametric method to cluster table attributes using a new undirected-graph algorithm that groups nodes connected by edges with similar affinity values. | SIGMOD 1989 | Graph Algorithm, Vertical Partitioning |
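The reward idea in the RLR-Tree summary above can be illustrated with a minimal sketch. All names here are hypothetical stand-ins, and the greedy policy is a simplification of the paper's learned one; the only idea carried over from the summary is that actions are scored by query cost relative to a fixed base (reference) tree.

```python
# Illustrative sketch, not the paper's actual API: actions (e.g. which
# subtree to insert into, or where to cut) are scored by the relative
# query-cost saving of the resulting tree over a fixed base tree.

def reward(base_cost: float, tree_cost: float) -> float:
    """Relative query-cost saving of the learned tree over the base tree.
    Positive when the learned tree answers the reference queries cheaper."""
    return (base_cost - tree_cost) / base_cost

def choose_action(actions, estimate_cost, base_cost):
    """Greedy stand-in for the learned policy: pick the action whose
    resulting tree has the highest estimated reward vs. the base tree."""
    return max(actions, key=lambda a: reward(base_cost, estimate_cost(a)))
```

For example, with estimated costs `{a1: 9.0, a2: 4.0, a3: 6.0}` and a base cost of `10.0`, the greedy stand-in picks `a2`, the action with the largest relative saving.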
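The PAW summary can likewise be sketched as a variance-aware objective. The function names, the linear blending weight `lam`, and the query model below are assumptions made for illustration, not PAW's exact formulation; the idea taken from the summary is that candidate cuts are scored against anticipated future queries in addition to the seen workload.

```python
# Illustrative sketch: a candidate cut is scored on the seen workload plus
# a weighted sample of anticipated future queries, so cuts that overfit
# the seen queries are penalized. The blending scheme is an assumption.

def partition_cost(cut, queries, cost_of):
    """Average cost of answering a query set under a given cut."""
    return sum(cost_of(cut, q) for q in queries) / len(queries)

def robust_cost(cut, seen, future, cost_of, lam=0.3):
    """Blend seen-workload cost with anticipated future-query cost."""
    return ((1 - lam) * partition_cost(cut, seen, cost_of)
            + lam * partition_cost(cut, future, cost_of))
```

A cut that is optimal for the seen workload but far from the anticipated queries scores worse under `robust_cost` than a cut that trades a little seen-workload performance for robustness, which is the trade-off the summary describes.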