<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Projects | ISLab, the University of Osaka</title><link>http://is.d3c.osaka-u.ac.jp/en/project/</link><atom:link href="http://is.d3c.osaka-u.ac.jp/en/project/index.xml" rel="self" type="application/rss+xml"/><description>Projects</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Fri, 03 Apr 2026 00:00:00 +0900</lastBuildDate><image><url>http://is.d3c.osaka-u.ac.jp/media/logo_hu395f5bc88680994b3144d9b1afedc509_9245_300x300_fit_lanczos_3.png</url><title>Projects</title><link>http://is.d3c.osaka-u.ac.jp/en/project/</link></image><item><title>JST Top ASPIRE 'Deep Sensing: Jointly optimizing sensing and processing for CV/AI applications'</title><link>http://is.d3c.osaka-u.ac.jp/en/project/aspire/</link><pubDate>Fri, 03 Apr 2026 00:00:00 +0900</pubDate><guid>http://is.d3c.osaka-u.ac.jp/en/project/aspire/</guid><description>&lt;p>This collaborative research aims to generalize the &amp;ldquo;deep sensing&amp;rdquo; framework, which jointly optimizes both sensing and processing, and demonstrate its feasibility and efficiency across a wide range of computer vision and AI applications. Our team comprises leading research groups in Computational Photography and Computer Vision. We collaboratively develop a new computational camera system and apply it to various downstream computer vision tasks to achieve high accuracy, low data acquisition requirements, and reduced computational costs. Through our joint efforts, we aim to establish the &amp;ldquo;deep sensing&amp;rdquo; framework as a standard approach in the computer vision and AI communities. Additionally, by promoting collaborative and complementary research and encouraging researcher mobility, this project seeks to strengthen international research networks between the US/Canada and Japan.&lt;/p></description></item><item><title>Construction of an Integrated Analysis Platform for Vision and Omics</title><link>http://is.d3c.osaka-u.ac.jp/en/project/vision-omics/</link><pubDate>Thu, 02 Apr 2026 00:00:00 +0900</pubDate><guid>http://is.d3c.osaka-u.ac.jp/en/project/vision-omics/</guid><description>&lt;h2 id="construction-of-an-integrated-analysis-platform-for-vision-and-omics">Construction of an Integrated Analysis Platform for Vision and Omics&lt;/h2>
&lt;p>In life science and medical research, both image data (Vision)—such as microscopic and pathological images—and omics data (Omics)—such as gene expression levels and protein abundance—are often used.&lt;/p>
&lt;p>Microscopic and pathological images provide information about cellular and tissue morphology and spatial structure, whereas omics data comprehensively captures molecular-level states such as genes and proteins. These data types describe biological systems from different perspectives, and integrating them enables a deeper understanding of biological phenomena.&lt;/p>
&lt;p>In recent years, advances in technologies such as spatial transcriptomics have made it possible to obtain gene expression data along with spatial information within tissues, leading to active research on the integrated analysis of image and molecular data. However, because image data and omics data differ significantly in format and dimensionality, effectively integrating them is not straightforward.&lt;/p>
&lt;p>In this research, we develop methods for integrated analysis of image and omics data using image analysis techniques and machine learning. For example, by associating morphological features extracted from tissue images with gene expression data, we aim to deepen the understanding of cellular states and disease mechanisms. Through such integrated analysis of Vision and Omics, we seek to generate new insights that contribute to our understanding of diseases and to research on diagnosis and treatment.&lt;/p></description></item><item><title>Label-efficient Medical Image Analysis</title><link>http://is.d3c.osaka-u.ac.jp/en/project/label-efficient-mia/</link><pubDate>Thu, 02 Apr 2026 00:00:00 +0900</pubDate><guid>http://is.d3c.osaka-u.ac.jp/en/project/label-efficient-mia/</guid><description>&lt;h2 id="label-efficient-medical-image-analysis">Label-efficient Medical Image Analysis&lt;/h2>
&lt;p>We focus on medical images acquired for disease diagnosis and treatment—such as pathological images, microscopic images, fundus images, and endoscopic images—and conduct research on medical image analysis aimed at identifying lesion locations and classifying disease types using machine learning techniques.&lt;/p>
&lt;p>With the advent of deep learning, recognition technologies have rapidly advanced, and high-accuracy analysis is becoming achievable when sufficient training data is available.&lt;/p>
&lt;p>However, there is a major challenge in medical image analysis: it is difficult to prepare a large amount of high-quality labeled data (annotations). Labeling medical images must be performed by physicians or specialists, requiring substantial time and effort. For example, for pathological images, tumor regions must be precisely annotated at the pixel level, and cell-level annotation requires expert knowledge and long hours of work.&lt;/p>
&lt;p>Against this background, our laboratory is working on label-efficient learning, which aims to achieve high-accuracy analysis with limited labeled data. Specifically, we develop methods such as semi-supervised learning, which leverages a small amount of labeled data together with a large amount of unlabeled data, and weakly supervised learning, which learns from coarse labels, and apply them to medical image analysis. Through these approaches, we aim to significantly reduce annotation costs while developing medical image analysis technologies that can be used in real clinical settings.&lt;/p></description></item><item><title>Optimization of Physical Encoders for Recognition Tasks and an Application to Pathological Diagnosis</title><link>http://is.d3c.osaka-u.ac.jp/en/project/kibans-23h05490/</link><pubDate>Sat, 20 May 2023 00:00:00 +0900</pubDate><guid>http://is.d3c.osaka-u.ac.jp/en/project/kibans-23h05490/</guid><description/></item><item><title>Computational Optical Imaging for Endoscopic Surgery</title><link>http://is.d3c.osaka-u.ac.jp/en/project/kiban_s-plenoptic/</link><pubDate>Sat, 01 Oct 2022 00:00:00 +0900</pubDate><guid>http://is.d3c.osaka-u.ac.jp/en/project/kiban_s-plenoptic/</guid><description/></item><item><title>Knowledge VQA</title><link>http://is.d3c.osaka-u.ac.jp/en/project/kiban_b-kvqa/</link><pubDate>Wed, 01 Jul 2020 10:13:28 +0900</pubDate><guid>http://is.d3c.osaka-u.ac.jp/en/project/kiban_b-kvqa/</guid><description>&lt;p>Visual question answering (VQA) with knowledge is a task that requires knowledge to answer questions on images/video. This additional requirement of knowledge poses an interesting challenge on top of the classic VQA tasks. Specifically, a system needs to explore external knowledge sources to answer the questions correctly, as well as understanding the visual content.&lt;/p>
&lt;p>We created &lt;a href="https://knowit-vqa.github.io" target="_blank" rel="noopener">a dedicated dataset for our knowledge VQA task&lt;/a> and released it to the public so that everyone can try our new task. The results were presented at &lt;a href="https://aaai.org/Conferences/AAAI-20/" target="_blank" rel="noopener">AAAI 2020&lt;/a>.&lt;/p>
&lt;p>Representation of videos has been a major research topic for various deep learning applications, including visual question answering. This is a challenging problem, especially for tasks that involve vision and language, and some researchers have pointed out that deep neural network-based models mainly rely on the natural language text rather than the visual content. We propose to use a textual representation of videos, in which SOTA models for detection/recognition, together with some rules, are used to generate text. The results were presented at &lt;a href="https://eccv2020.eu" target="_blank" rel="noopener">ECCV 2020&lt;/a>.&lt;/p>
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
&lt;iframe src="https://www.youtube.com/embed/KCUUvSpf-Qo" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" allowfullscreen title="YouTube Video">&lt;/iframe>
&lt;/div>
&lt;br/>
&lt;p>We also work on question answering on art, which requires a high-level understanding of the paintings themselves as well as the knowledge associated with them.&lt;/p>
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
&lt;iframe src="https://www.youtube.com/embed/I78SoOkH3dM" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" allowfullscreen title="YouTube Video">&lt;/iframe>
&lt;/div>
&lt;h3 id="publications">Publications&lt;/h3>
&lt;ul>
&lt;li>Noa Garcia, Chentao Ye, Zihua Liu, Qingtao Hu, Mayu Otani, Chenhui Chu, Yuta Nakashima, and Teruko Mitamura (2020). &lt;a href="https://arxiv.org/abs/2008.12520" target="_blank" rel="noopener">A Dataset and Baselines for Visual Question Answering on Art&lt;/a>. Proc. European Conference on Computer Vision Workshops.&lt;/li>
&lt;li>Noa Garcia and Yuta Nakashima (2020). &lt;a href="https://arxiv.org/abs/2007.08751" target="_blank" rel="noopener">Knowledge-Based VideoQA with Unsupervised Scene Descriptions&lt;/a>. Proc. European Conference on Computer Vision.&lt;/li>
&lt;li>Noa Garcia, Mayu Otani, Chenhui Chu, and Yuta Nakashima (2020). KnowIT VQA: Answering knowledge-based questions about videos. Proc. AAAI Conference on Artificial Intelligence.&lt;/li>
&lt;li>Zekun Yang, Noa Garcia, Chenhui Chu, Mayu Otani, Yuta Nakashima, and Haruo Takemura (2020). BERT representations for video question answering. Proc. IEEE Winter Conference on Applications of Computer Vision.&lt;/li>
&lt;li>Noa Garcia, Chenhui Chu, Mayu Otani, and Yuta Nakashima (2019). Video meets knowledge in visual question answering. MIRU.&lt;/li>
&lt;li>Zekun Yang, Noa Garcia, Chenhui Chu, Mayu Otani, Yuta Nakashima, and Haruo Takemura (2019). Video question answering with BERT. MIRU.&lt;/li>
&lt;/ul></description></item><item><title>Australian History in Newspaper and AI</title><link>http://is.d3c.osaka-u.ac.jp/en/project/australian-history/</link><pubDate>Wed, 01 Jul 2020 10:13:06 +0900</pubDate><guid>http://is.d3c.osaka-u.ac.jp/en/project/australian-history/</guid><description>&lt;p>In collaboration with &lt;a href="http://www.let.osaka-u.ac.jp/seiyousi/fujikawa.html" target="_blank" rel="noopener">Prof. Fujikawa&lt;/a> at the Graduate School of Letters, the University of Osaka, we are exploring Australian history through public meetings, whose calls for participation appeared in the newspapers of the time.&lt;/p>
&lt;p>We explore ways to analyze such newspapers with state-of-the-art NLP technologies to improve the OCR output and to automatically detect and structure calls for participation.&lt;/p></description></item><item><title>MLPhys: Foundation of Machine Learning Physics</title><link>http://is.d3c.osaka-u.ac.jp/en/project/mlphys/</link><pubDate>Wed, 01 Jul 2020 10:12:37 +0900</pubDate><guid>http://is.d3c.osaka-u.ac.jp/en/project/mlphys/</guid><description>&lt;p>Throughout its long history, physics has provided the most precise testing ground in the natural sciences, solving problems across the various hierarchies of nature in collaboration with the mathematical sciences.&lt;/p>
&lt;p>Machine learning, on the other hand, is a major research field whose mathematical framework forms the foundation of artificial intelligence and which has seen explosive progress in recent years thanks to advances in computational science. We are launching the transformative research area &amp;ldquo;Machine Learning Physics&amp;rdquo; to integrate these two major fields.&lt;/p>
&lt;p>For more detail, &lt;a href="https://mlphys.scphys.kyoto-u.ac.jp/en/" target="_blank" rel="noopener">visit here&lt;/a>.&lt;/p></description></item><item><title>CREST 3D Image Recognition AI for Cancer Diagnosis Support</title><link>http://is.d3c.osaka-u.ac.jp/en/project/crest-3d-cancer/</link><pubDate>Wed, 01 Jul 2020 10:08:48 +0900</pubDate><guid>http://is.d3c.osaka-u.ac.jp/en/project/crest-3d-cancer/</guid><description/></item><item><title>Society 5.0 Projects</title><link>http://is.d3c.osaka-u.ac.jp/en/project/society5_0/</link><pubDate>Wed, 01 Jul 2020 10:07:15 +0900</pubDate><guid>http://is.d3c.osaka-u.ac.jp/en/project/society5_0/</guid><description>&lt;p>D3 Center, the University of Osaka is now working on &lt;a href="https://www.ildi.ids.osaka-u.ac.jp/" target="_blank" rel="noopener">Society 5.0&lt;/a> using information science and technology.&lt;/p>
&lt;blockquote>
&lt;p>In the world of Society 5.0, innovations in IoT, Big Data, robotics, and AI will be part of everyday life, helping people lead active and high-quality lives, creating a super-smart society. This project encourages collaboration across projects and university organizations, thus promoting faster adoption of research results in real-world society.&lt;/p>
&lt;/blockquote>
&lt;p>Under this large project, we are working on two sub-projects:&lt;/p>
&lt;h2 id="social-sensing-for-society-50">Social sensing for Society 5.0&lt;/h2>
&lt;p>Sensing technologies that aggregate various types of information from publicly and ubiquitously available social probes including social networking services are essential for providing various services in the world of Society 5.0. We are working towards establishing a social sensing technology that can infer emotional states of people for timely services.&lt;/p>
&lt;h2 id="future-school-technology-for-society-50">Future school technology for Society 5.0&lt;/h2>
&lt;p>Education for everyone is one of the SDGs, and e-learning is one solution toward this goal. We are collectively working on a broad range of technologies related to e-learning.&lt;/p></description></item><item><title>Esthetic Dentistry and Optical Analysis</title><link>http://is.d3c.osaka-u.ac.jp/en/project/esthetic-dentistry/</link><pubDate>Wed, 01 Jul 2020 10:06:58 +0900</pubDate><guid>http://is.d3c.osaka-u.ac.jp/en/project/esthetic-dentistry/</guid><description>&lt;p>There have been ever-increasing demands for esthetics in the oral cavity after tooth restoration. Especially for anterior teeth, functional restoration is not enough; restored teeth should have color tones and light transmission similar to those of natural teeth, because the appearance of restored teeth under light varies greatly depending on the material used for the crown restoration and the abutment structure.&lt;/p>
&lt;p>Therefore, we analyze the optical properties of various dental restoration materials and dental tissues via optical simulation, visualizing and analyzing the behavior of light in natural teeth and dental restorations to support esthetic tooth restoration.&lt;/p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="" srcset="
/en/project/esthetic-dentistry/bs02_ja_hufa2c9813ab5ec072b2ff3aff4129a81c_207794_ae3ceeefde8ab5ab2a397b0279f09020.webp 400w,
/en/project/esthetic-dentistry/bs02_ja_hufa2c9813ab5ec072b2ff3aff4129a81c_207794_d4d31e3696df361371c4ae618e3ee553.webp 760w,
/en/project/esthetic-dentistry/bs02_ja_hufa2c9813ab5ec072b2ff3aff4129a81c_207794_1200x1200_fit_q75_h2_lanczos_3.webp 1200w"
src="http://is.d3c.osaka-u.ac.jp/en/project/esthetic-dentistry/bs02_ja_hufa2c9813ab5ec072b2ff3aff4129a81c_207794_ae3ceeefde8ab5ab2a397b0279f09020.webp"
width="654"
height="272"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure></description></item><item><title>Brain Pharmaceutics</title><link>http://is.d3c.osaka-u.ac.jp/en/project/brain-pharmaceutics/</link><pubDate>Wed, 01 Jul 2020 10:06:11 +0900</pubDate><guid>http://is.d3c.osaka-u.ac.jp/en/project/brain-pharmaceutics/</guid><description>&lt;p>Coming soon&lt;/p></description></item><item><title>AI Hospital</title><link>http://is.d3c.osaka-u.ac.jp/en/project/ai-hospital/</link><pubDate>Wed, 01 Jul 2020 10:05:12 +0900</pubDate><guid>http://is.d3c.osaka-u.ac.jp/en/project/ai-hospital/</guid><description>&lt;p>The University of Osaka Medical Hospital has launched the &lt;a href="https://www.hosp.med.osaka-u.ac.jp/english/departments/ai.html" target="_blank" rel="noopener">Artificial Intelligence Center for Medical Research and Application (AIM)&lt;/a>, which supports physicians, nurses, and all medical staff, in collaboration with medical information specialists and data scientists, to boost the medical application of AI in the daily practice of the hospital.&lt;/p>
&lt;p>We are collaborating with AIM to provide cutting-edge technologies.&lt;/p>
&lt;h2 id="ophthalmology-and-ai">Ophthalmology and AI&lt;/h2>
&lt;p>In ophthalmology and other departments, vessels in retinal fundus images provide rich information on the cardiovascular system of the human body. We proposed a state-of-the-art method, coined &lt;a href="publication/li-2020-a/">IterNet&lt;/a>, for extracting vessels from retinal fundus images, as shown in (e) in the figure below.&lt;/p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="" srcset="
/en/project/ai-hospital/iternet_hu5ea4eead4f7a5d7e4e3c54a3ef2dbf33_96994_14aa93646f0821df7d613d88e79dbc37.webp 400w,
/en/project/ai-hospital/iternet_hu5ea4eead4f7a5d7e4e3c54a3ef2dbf33_96994_f032cd7befcad180b57482de4147b2d9.webp 760w,
/en/project/ai-hospital/iternet_hu5ea4eead4f7a5d7e4e3c54a3ef2dbf33_96994_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="http://is.d3c.osaka-u.ac.jp/en/project/ai-hospital/iternet_hu5ea4eead4f7a5d7e4e3c54a3ef2dbf33_96994_14aa93646f0821df7d613d88e79dbc37.webp"
width="720"
height="380"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;p>On top of this technology, we also invented a new method for classifying vessels into arteries and veins, which takes a two-step approach: we first segment vessels in input images using an IterNet-based method and then classify them into arteries and veins with some post-processing.&lt;/p>
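&lt;p>To make this two-step structure concrete, the sketch below shows a simplified pipeline in Python: a stand-in segmentation model produces a vessel mask, and a naive colour-based post-processing step assigns artery/vein labels. The dummy model and the red/green heuristic are illustrative placeholders only, not IterNet or our actual classifier.&lt;/p>
&lt;pre>&lt;code class="language-python"># Simplified two-stage sketch: (1) segment vessels, (2) label them artery/vein.
# The dummy "model" and the colour heuristic are placeholders, not our method.
import numpy as np

def segment_vessels(image, model, threshold=0.5):
    """Stage 1: binary vessel mask from a segmentation network's probability map."""
    prob = model(image)                     # HxW vessel probabilities in [0, 1]
    return prob > threshold

def classify_artery_vein(image, vessel_mask):
    """Stage 2: naive per-pixel artery/vein labelling by a red/green intensity ratio."""
    red = image[..., 0].astype(float)
    green = image[..., 1].astype(float)
    ratio = red / (green + 1e-6)
    labels = np.zeros(vessel_mask.shape, dtype=np.uint8)      # 0 = background
    labels[vessel_mask] = 2                                    # 2 = vein by default
    labels[np.logical_and(vessel_mask, ratio > 1.2)] = 1       # 1 = artery (redder)
    return labels

# Toy usage with a random image and a dummy segmentation "model".
rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))
dummy_model = lambda img: img[..., 1]       # pretend the green channel is the prob map
mask = segment_vessels(image, dummy_model)
labels = classify_artery_vein(image, mask)
print(np.bincount(labels.ravel(), minlength=3))  # background / artery / vein pixel counts
&lt;/code>&lt;/pre>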
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="" srcset="
/en/project/ai-hospital/segmentation_hu6603ce41fa995fbca303679a56d9cb90_58078_c1a85c6a6f5a794031156d35393845fa.webp 400w,
/en/project/ai-hospital/segmentation_hu6603ce41fa995fbca303679a56d9cb90_58078_c24c2815ac58ceba70ec30f1fd4770f1.webp 760w,
/en/project/ai-hospital/segmentation_hu6603ce41fa995fbca303679a56d9cb90_58078_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="http://is.d3c.osaka-u.ac.jp/en/project/ai-hospital/segmentation_hu6603ce41fa995fbca303679a56d9cb90_58078_c1a85c6a6f5a794031156d35393845fa.webp"
width="720"
height="371"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure></description></item><item><title>Law and AI</title><link>http://is.d3c.osaka-u.ac.jp/en/project/green_law/</link><pubDate>Wed, 17 Jun 2020 23:02:32 +0900</pubDate><guid>http://is.d3c.osaka-u.ac.jp/en/project/green_law/</guid><description>&lt;p>In collaboration with &lt;a href="https://researchmap.jp/read0180483?lang=en" target="_blank" rel="noopener">Prof. Noriko Okubo&lt;/a> at the Graduate School of Law and Politics, the University of Osaka, we are studying how to automatically evaluate how green laws are enforced in different countries.&lt;/p>
&lt;p>The participation principle of green laws consists of 1) the right of access to information, 2) participation in the policy decision process, and 3) access to justice; however, actual implementation varies from country to country, and legal methodologies have been explored for evaluating its effectiveness. This work investigates legal evaluation criteria for the participation principle of green laws, analyzes the pros and cons of the Japanese participation system from a comparative perspective, and proposes recommendations for establishing environmental democracy.&lt;/p>
&lt;p>The difficulty lies in automatically finding related legislation, cases, statutes, etc., in different languages. As a first attempt, we proposed a method for identifying the topic of such legal documents by analyzing citation networks in addition to classic topic modeling. The figure below shows citation networks among different types of legal documents (e.g., cases and their prior cases).&lt;/p>
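&lt;p>As a simplified illustration of combining topic modeling with citation structure, the sketch below smooths toy per-document topic scores over a small citation graph so that documents inherit topical evidence from the documents they cite. The documents, topic values, and mixing rule are placeholders, not the method we proposed.&lt;/p>
&lt;pre>&lt;code class="language-python"># Illustrative sketch: propagate topic scores along a legal citation network.
# Topic vectors are toy numbers, e.g. the output of a classic topic model (LDA).
import numpy as np
import networkx as nx

# Directed citation edges: (citing document, cited document).
citations = [("case_A", "statute_1"), ("case_A", "case_B"), ("case_B", "statute_1")]
graph = nx.DiGraph(citations)

# Toy topic distributions per document, e.g. [environmental, procedural].
topics = {
    "case_A": np.array([0.2, 0.8]),
    "case_B": np.array([0.5, 0.5]),
    "statute_1": np.array([0.9, 0.1]),
}

def smooth_topics(graph, topics, alpha=0.5, iterations=3):
    """Mix each document's topic vector with the mean of the documents it cites."""
    scores = dict(topics)
    for _ in range(iterations):
        updated = {}
        for doc in graph.nodes:
            cited = list(graph.successors(doc))
            if cited:
                neighbour_mean = np.mean([scores[c] for c in cited], axis=0)
                updated[doc] = (1 - alpha) * scores[doc] + alpha * neighbour_mean
            else:
                updated[doc] = scores[doc]
        scores = updated
    return scores

for doc, vec in smooth_topics(graph, topics).items():
    print(doc, np.round(vec, 2))
&lt;/code>&lt;/pre>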
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="" srcset="
/en/project/green_law/citation_networks_huab3f0c6cd7d27657c2614c3ccbddca1e_853493_6bd0b6f7aeac5ccd955a2e2929e321f6.webp 400w,
/en/project/green_law/citation_networks_huab3f0c6cd7d27657c2614c3ccbddca1e_853493_7148563d55407c04d89fe3576bc0a251.webp 760w,
/en/project/green_law/citation_networks_huab3f0c6cd7d27657c2614c3ccbddca1e_853493_1200x1200_fit_q75_h2_lanczos_3.webp 1200w"
src="http://is.d3c.osaka-u.ac.jp/en/project/green_law/citation_networks_huab3f0c6cd7d27657c2614c3ccbddca1e_853493_6bd0b6f7aeac5ccd955a2e2929e321f6.webp"
width="760"
height="248"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure></description></item><item><title>Buddha Face and AI</title><link>http://is.d3c.osaka-u.ac.jp/en/project/buddha-face/</link><pubDate>Wed, 17 Jun 2020 22:52:41 +0900</pubDate><guid>http://is.d3c.osaka-u.ac.jp/en/project/buddha-face/</guid><description>&lt;p>In collaboration with &lt;a href="http://www.dma.jim.osaka-u.ac.jp/view?l=en&amp;amp;u=6617" target="_blank" rel="noopener">Prof. Fujioka&lt;/a> at the Graduate School of Letters/School of Letters, the University of Osaka, we are attempting to create an AI for analyzing various aspects of Buddha faces in images.&lt;/p>
&lt;p>Focusing on the face of the Buddha statue, i.e., the &amp;ldquo;Buddha face&amp;rdquo;, we analyze the stylistic characteristics of each region, era, and author using statistical and machine learning approaches based on images and 3D geometric data, building a genealogy of Buddha faces. The goal is to realize style judgment based on knowledge obtained from data rather than on the experience of art historians. This promotes the globalization of Buddha statue research and helps to identify the genealogy of Buddha faces propagated along the Silk Road, giving a new perspective on the spread of culture in Asia.&lt;/p>
&lt;p>We have built several interfaces for browsing through a large corpus of precious Buddha faces and for facilitating annotation of basic metadata on the statues, which will then serve as a source for training more sophisticated models to analyze them.&lt;/p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="" srcset="
/en/project/buddha-face/interfaces_hu412127b82145130d68f689675871a563_688796_f7ba05e1845e768dc83a866faf698b62.webp 400w,
/en/project/buddha-face/interfaces_hu412127b82145130d68f689675871a563_688796_32cc5b04574eb12d887c0669bd74a304.webp 760w,
/en/project/buddha-face/interfaces_hu412127b82145130d68f689675871a563_688796_1200x1200_fit_q75_h2_lanczos_3.webp 1200w"
src="http://is.d3c.osaka-u.ac.jp/en/project/buddha-face/interfaces_hu412127b82145130d68f689675871a563_688796_f7ba05e1845e768dc83a866faf698b62.webp"
width="760"
height="172"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;p>For example, we built a model, shown in the figure below, that embeds various information about the target entities (i.e., Buddha statues), such as authors, eras, and places, into a vector representation of the images and uses it for other tasks such as classification.&lt;/p>
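&lt;p>As an illustration only, the sketch below shows one way such a model could be structured: an image encoder produces a vector representation of a Buddha face, and lightweight heads predict metadata attributes (era, region, author) from that shared embedding. The class name, layer sizes, and attribute counts are our own placeholders, not the architecture in the figure.&lt;/p>
&lt;pre>&lt;code class="language-python"># Hypothetical sketch of a metadata-conditioned embedding network (PyTorch).
import torch
import torch.nn as nn

class ContextEmbeddingNet(nn.Module):  # placeholder name, not the actual model
    def __init__(self, embed_dim=256, n_eras=8, n_regions=12, n_authors=20):
        super().__init__()
        self.backbone = nn.Sequential(            # stand-in for a CNN/ViT encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # One classification head per metadata attribute, all sharing the embedding.
        self.era_head = nn.Linear(embed_dim, n_eras)
        self.region_head = nn.Linear(embed_dim, n_regions)
        self.author_head = nn.Linear(embed_dim, n_authors)

    def forward(self, x):
        z = self.backbone(x)                      # vector representation of the image
        return z, self.era_head(z), self.region_head(z), self.author_head(z)

model = ContextEmbeddingNet()
z, era_logits, region_logits, author_logits = model(torch.randn(2, 3, 224, 224))
print(z.shape)  # embedding reused for retrieval or downstream classification
&lt;/code>&lt;/pre>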
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="" srcset="
/en/project/buddha-face/contextnet_hu255c0008cb8fe6340d26b769ee3d244e_313842_798b6597a98e287cd696b24527b015e3.webp 400w,
/en/project/buddha-face/contextnet_hu255c0008cb8fe6340d26b769ee3d244e_313842_2ee48281551faa1df1517389c5660b8f.webp 760w,
/en/project/buddha-face/contextnet_hu255c0008cb8fe6340d26b769ee3d244e_313842_1200x1200_fit_q75_h2_lanczos_3.webp 1200w"
src="http://is.d3c.osaka-u.ac.jp/en/project/buddha-face/contextnet_hu255c0008cb8fe6340d26b769ee3d244e_313842_798b6597a98e287cd696b24527b015e3.webp"
width="760"
height="336"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure></description></item></channel></rss>