From b35fd2fa10d0854e0de469d984b665b1e3b4558b Mon Sep 17 00:00:00 2001
From: Jianwei Yang <jwyang@users.noreply.github.com>
Date: Thu, 27 Jul 2023 23:48:17 -0700
Subject: [PATCH] Update README.md

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index 7cf0c2e..bfccf4d 100644
--- a/README.md
+++ b/README.md
@@ -33,6 +33,7 @@ git clone git@github.com:UX-Decoder/Segment-Everything-Everywhere-All-At-Once.gi
 * [LLaVA](https://github.com/haotian-liu/LLaVA) : Large Language and Vision Assistant.
 
 ## :rocket: Updates
+* **[2023.07.27]** :roller_coaster: We are excited to release our [X-Decoder](https://github.com/UX-Decoder/X-Decoder) training code! We will release the training code for its descendant, SEEM, very soon!
 * **[2023.07.10]** We release [Semantic-SAM](https://github.com/UX-Decoder/Semantic-SAM), a universal image segmentation model that can segment and recognize anything at any desired granularity. Code and checkpoint are available!
 * **[2023.05.02]** We have released the [SEEM Focal-L](https://projects4jw.blob.core.windows.net/x-decoder/release/seem_focall_v1.pt) and [X-Decoder Focal-L](https://projects4jw.blob.core.windows.net/x-decoder/release/xdecoder_focall_last.pt) checkpoints and [configs](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once/blob/main/demo_code/configs/seem/seem_focall_lang.yaml)!
 * **[2023.04.28]** We have updated the [arXiv paper](https://arxiv.org/pdf/2304.06718.pdf), which shows *better interactive segmentation results than SAM*, even though SAM was trained on 50x more data than ours!