Posts
- Scientists rise up against statistical significance
- Highly accurate protein structure prediction with AlphaFold
- Multi-Layer attention-based explainability via transformers for tabular data
- UNeXt: MLP-based Rapid Medical Image Segmentation Network
- Diffusion Autoencoders: Toward a Meaningful and Decodable Representation
- Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs
- Medical PINN: non-invasive blood pressure estimation
- Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture
- SAM 2: Segment Anything in Images and Videos
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time
- I-MedSAM: Implicit Medical Image Segmentation with Segment Anything
- Dehazing Ultrasound using Diffusion Models
- CoTracker: It is Better to Track Together
- RePaint: Inpainting using Denoising Diffusion Probabilistic Models
- ImageBind: One Embedding Space To Bind Them All
- Unsupervised Blind Source Separation with Variational Auto-Encoders
- Self-supervised Feature Learning for 3D Medical Images by Playing a Rubik’s Cube
- Multi-modal Variational Autoencoders for normative modelling across multiple imaging modalities
- Revisiting the Calibration of Modern Neural Networks
- Topology-Aware Uncertainty for Image Segmentation
- Brain Imaging Generation with Latent Diffusion Models
- OSS-Net: Memory Efficient High Resolution Semantic Segmentation of 3D Medical Data
- Image as Set of Points
- High-resolution image synthesis with latent diffusion models
- DETR: End-to-End Object Detection with Transformers
- Stochastic Segmentation Networks: Modelling Spatially Correlated Aleatoric Uncertainty
- ICCV 2023 - Selection of papers
- NISF: Neural Implicit Segmentation Functions
- A visual–language foundation model for pathology image analysis using medical Twitter
- Segment Any Medical Image
- Multi-Modal Masked Autoencoders for Medical Vision-and-Language Pre-Training
- Towards Robust Interpretability with Self-Explaining Neural Networks
- Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
- Shape-Aware Organ Segmentation by Predicting Signed Distance Maps
- Deep Unfolded Robust PCA with Application to Clutter Suppression in Ultrasound
- DALL-E 2 explained
- Learning Loss for Active Learning
- CLIP: Learning Transferable Visual Models From Natural Language Supervision
- Adversarial Discriminative Domain Adaptation
- Regularized Evolution for Image Classifier Architecture Search
- UNesT: Local Spatial Representation Learning with Hierarchical Transformer for Efficient Medical Segmentation
- Transformer Interpretability Beyond Attention Visualization
- SAM: Segment Anything Model
- Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage
- Enhancing Pseudo Label Quality for Semi-Supervised Domain-Generalized Medical Image Segmentation
- Vision Transformer with Deformable Attention
- Fast Fourier Convolution
- Variational Dropout and the Local Reparameterization Trick
- Attribute-based regularization of latent spaces for variational auto-encoders
- ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders
- A ConvNet for the 2020s
- Complementing Brightness Constancy with Deep Networks for Optical Flow Prediction
- NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
- Fixing bias in reconstruction-based anomaly detection with Lipschitz discriminators
- CD²-pFed: Cyclic Distillation-guided Channel Decoupling for Model Personalization in Federated Learning
- What is being transferred in transfer learning?
- Attention Bottlenecks for Multimodal Fusion
- Curriculum Learning by Dynamic Instantaneous Hardness
- Sparse Multi-Channel Variational Autoencoder for the Joint Analysis of Heterogeneous Data
- Multi-Channel Stochastic Variational Inference for the Joint Analysis of Heterogeneous Biomedical Data in Alzheimer's Disease
- FedAMP: Personalized Cross-Silo Federated Learning on Non-IID Data
- Learning Maximally Monotone Operators for Image Recovery
- Deformable Convolutional Networks
- PerceptFlow: Real-time ultrafast Doppler image enhancement using CNNs and perceptual loss
- Neighborhood Attention Transformer
- What Do We Mean by Generalization in Federated Learning?
- Representation learning for improved interpretability and classification accuracy of clinical factors from EEG
- Token Merging: Your ViT But Faster
- A hierarchical probabilistic U-Net for modeling multi-scale ambiguities
- Improving Explainability of Disentangled Representations using Multipath-Attribution Mappings
- Deep Unsupervised Learning using Nonequilibrium Thermodynamics
- A probabilistic U-Net for the segmentation of ambiguous images
- Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
- Momentum Residual Neural Networks
- Masked Autoencoders Are Scalable Vision Learners
- C2FTrans: Coarse-to-Fine Transformers for Medical Image Segmentation
- Quantifying Attention Flow in Transformers
- VT-ADL: A Vision Transformer Network for Image Anomaly Detection and Localization
- UNETR: Transformers for 3D Medical Image Segmentation
- Complex Convolutional Neural Networks for Image Reconstruction from IQ Signal
- Fed2: Feature-Aligned Federated Learning
- Automatic 3D+t four-chamber CMR quantification of the UK Biobank: integrating imaging and non-imaging data priors at scale
- Recursive refinement network for deformable lung registration
- Flow over an espresso cup: Inferring 3D velocity and pressure fields from tomographic background oriented schlieren videos via physics-informed neural networks
- Escaping the big data paradigm with compact transformers
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
- Emerging Properties in Self-Supervised Vision Transformers
- Welcome to Jekyll!