検索メイニアック!3 Information Retrieval Maniac III

A blog by Tetsuya Sakai, lead researcher at Microsoft Research Asia, Beijing, China.


Mar 20 EVIA2013 submissions due
Apr 15 SIGIR2013 notification
Apr 17 FIT2013 submissions due
Apr 24 EVIA2013 notification
May 01 NTCIR-10 camera ready
May 08 EVIA2013 camera ready
May 10 NTCIR-11 task proposals due
May 10 CIKM2013 abstracts due
May 17 CIKM2013 submissions due
Jun 07 NTCIR-11 notification of accepted tasks
Jun 18 EVIA2013 http://research.nii.ac.jp/ntcir/evia2013/
Jun 18-21 NTCIR-10 http://research.nii.ac.jp/ntcir/ntcir-10/
Jun 21 FIT2013 notification
Jun 23 AIRS2013 submissions due
Jun 22-24 ASSIA2013 http://www.kc.tsukuba.ac.jp/assia2013/
Jul 01 FIT2013 camera ready
Jul 15 CIKM2013 notification
Jul 28 AIRS2013 notification
Jul 28-Aug 01 SIGIR2013[Dublin] http://sigir2013.ie/
Aug 11 CIKM2013 camera ready
Aug 11 AIRS2013 camera ready
Sep 04-06 FIT2013[Tottori] http://www.ipsj.or.jp/event/fit/fit2013/
Oct 27-Nov 01 CIKM2013[San Francisco] http://www.cikm2013.org/
Dec 09-11 AIRS2013[Singapore] http://www.colips.org/conference/airs2013/
Jan 31, 2014 NTCIR-11 additional pilot task proposals due
Feb 28, 2014 NTCIR-11 notification of accepted additional pilot tasks



NTCIR-11 Call for Task Proposals (DEADLINE: May 10, 2013)

NTCIR (NII Testbeds and Community for Information access Research) is a
sesquiannual series of evaluation conferences that mainly focuses on
Asian language information access. The first NTCIR conference
(NTCIR-1) took place in August/September 1999, and the tenth
(NTCIR-10) will take place in June 2013. Research teams from all over
the world participate in one or more of the NTCIR tasks to advance the
state of the art and to learn from one another's experiences.

As the tasks being run at NTCIR-10 are reaching the final stages, it
is time to call for tasks for the next NTCIR (NTCIR-11) which will be
concluded in December 2014. Task proposals will be reviewed by the
NTCIR Programme Committee, following the schedule below:


May 10, 2013 Deadline for task proposals (new and existing tasks)
June 7, 2013 Notification of accepted tasks
June 18-21, 2013 Task planning sessions at the NTCIR-10 conference
January 31, 2014 Deadline for additional pilot task proposals
February 28, 2014 Notification of accepted additional pilot tasks

Organisers of existing NTCIR-10 tasks (CrossLink, INTENT, 1CLICK,
PatentMT, RITE, SpokenDoc, MATH and MedNLP) are required to submit a
new proposal if they wish to continue them for NTCIR-11. New task
proposals are also very welcome. To organise an evaluation task is to
identify important research problems, tackle them strategically by
collaborating with other researchers (participants), build the
necessary evaluation framework to advance the state of the art, and
make an impact on the research community and on the future.

We will accept two types of tasks:

- Core challenge task: this is for fostering research on a particular
information access problem by providing researchers with a common
ground for evaluation. New test collections and evaluation methods
may be developed through the collaboration between task organisers
(proposers) and the task participants. At NTCIR-10, the core
challenge tasks were: CrossLink, INTENT, 1CLICK, PatentMT, RITE and
SpokenDoc.

- Pilot task: this is recommended if the information access problem to
be tackled is new and there are uncertainties as to how to evaluate
it. It may focus on a subproblem of an information access
problem. It may attract a smaller group of participating teams than
core challenge task, but may grow into a core challenge task in the
next round of NTCIR. At NTCIR-10, the pilot tasks were: MATH and
MedNLP.

NTCIR will provide "seed funding" to each accepted task, which can
be used for limited purposes such as hiring relevance assessors.
However, organisers of each task are expected to make the task as
self-sustaining as possible. The amount allocated to each task will
vary depending on requirements and the total number of accepted tasks,
but typical amounts would be around 1,000,000 JPY/year for a core
challenge task and around 500,000 JPY/year for a pilot task.

Please submit your task proposal as a PDF to the following EasyChair site
by May 10, 2013 (Japan Time: UTC+9).

As indicated in the important dates, there will be another call for
additional pilot tasks after the NTCIR-10 conference, to accommodate
"late-breaking information access problems."


Main part (a single A4 page)
- Task name and short name
- Task type (core challenge or pilot)
- Abstract
- Motivation
- Methodology
- Expected results

- Names and contact information of the organisers
- Prospective participants
- Data to be used and/or constructed
- Budget planning
- Schedule
- Other notes


Task proposals will be reviewed based on the following criteria:

- Importance of the task to the information access community and to society
- Timeliness of the task
- Organisers' commitment in ensuring a successful task
- Financial sustainability (self-sustainable tasks are encouraged)
- Soundness of the evaluation methodology
- Language scope


NTCIR Programme Committee:

Hsin-Hsi Chen (National Taiwan University, Taiwan)
Charles Clarke (University of Waterloo, Canada)
Kalervo Järvelin (University of Tampere, Finland)
Gareth Jones (Dublin City University, Ireland)
Gary Geunbae Lee (POSTECH, South Korea)
Maarten de Rijke (University of Amsterdam, The Netherlands)
Stephen Robertson (Microsoft Research Cambridge, UK)
Ian Soboroff (NIST, US)
Hideo Joho (Co-chair, University of Tsukuba, Japan)
Tetsuya Sakai (Co-chair, Microsoft Research Asia, PRC)

NTCIR General Chairs:

Noriko Kando (NII, Japan)
Tsuneaki Kato (The University of Tokyo, Japan)
Douglas W. Oard (University of Maryland, USA)
Mark Sanderson (RMIT, Australia)



SIGIR 2013 will introduce a new review criterion, Reproducibility of Methods.

These are the exact questions that the reviewers will see on the new review form:

Reproducibility of Methods

Are the descriptions of the methods used detailed and accurate? Given the resources used in the paper, or (if they are unavailable) similar resources, could researchers carry out similar experiments to verify the results? What further description could the authors provide?

So when you write a paper for SIGIR 2013, make sure you explain what you did rigorously!

Tetsuya Sakai, SIGIR 2013 PC co-chair



Oct 26 WSDM notification
Oct 26 IRJ diversity special issue final notification (for major revision papers)
Oct 31 1CLICK-2 run submissions
Oct 29-Nov 02 CIKM2012[Hawaii] http://www.cikm2012.org/
Nov 07 IAS2012 notification
Nov 21 IAS2012 camera ready
Nov 30 WSDM camera ready
Nov 30 ECIR notification
Dec 07 IAS2012
Dec 10 OAIR submissions due
Dec 17-19 AIRS2012[Tianjin] http://airs2012.tju.edu.cn/
Nov-Jan 1CLICK-2 nugget match evaluation
Jan 21 SIGIR2013 full paper abstracts due
Jan 28 SIGIR2013 full papers due
Jan 31 INTENT-2 evaluation results + early draft overview
Feb 01 1CLICK-2 early draft overview
Feb 04 SIGIR2013 workshop proposals due
Feb 04 OAIR notification
Feb 06-08 WSDM2013[Rome] http://wsdm2013.org/
Feb 17 OAIR camera ready
Feb 18 SIGIR2013 posters/tutorial proposals due
Feb 28 1CLICK-2 evaluation results
Mar 01 NTCIR-10 draft papers due
Mar 11 SIGIR2013 workshop notifications
Mar 20 EVIA2013 submissions due
Mar 25-27 ECIR2013[Moscow] http://ecir2013.org/
Apr 15 SIGIR2013 acceptance notifications
Apr 24 EVIA2013 notification
May 01 NTCIR-10 camera ready
May 08 EVIA2013 camera ready
May 22-24 OAIR2013[Lisbon] http://oair2013.org
Jun 18 EVIA2013 http://research.nii.ac.jp/ntcir/evia2013/
Jun 18-21 NTCIR-10 http://research.nii.ac.jp/ntcir/ntcir-10/index.html
Jul 28-Aug 01 SIGIR2013[Dublin] http://sigir2013.ie/



Papers from #weiird2012 homework http://www.nii.ac.jp/shonan/seminar020/

Anderson, L., Krathwohl, D., Airasian, P., Cruikshank, K., Mayer, R., Pintrich, P., . . . Wittrock, M. (2000). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, Abridged Version: Allyn & Bacon.
Azzopardi, L. (2009). Usage Based Effectiveness Measures. In Proceedings of the 18th ACM CIKM, pp. 631-640.
Azzopardi, L. (2011). The economics in interactive information retrieval. In: Baeza-Yates, R. et al. (Eds.) Proceedings of ACM SIGIR '11, pp. 15-24.
Baader, F., Lutz, C., Milicic, M., Sattler, U. and Wolter, F. (2005) "Description Logic Based Approach to Reasoning about Web Services", WWW 2005.
Baskaya, F., Keskustalo, H. and Järvelin, K. (2012). Time Drives Interaction: Simulating Sessions in Diverse Searching Environments. Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval (SIGIR 2012) pp. 105-114 [x 2]
Bates, M. J. (1989). Design of browsing and berrypicking techniques for online search interfaces. Online Review, 13, 407-424. [x 2]
Belkin,N.J., Clarke, C.L.A., Gao, N., Kamps, J., Karlgren, J. (2011) Report on the SIGIR workshop on “entertain me”: Supporting complex search tasks. SIGIR Forum, 45(2):51-59
Belkin, N.J. (2010) On the evaluation of interactive information retrieval systems. In: B. Larsen, J.W. Schneider & F. Åström (Eds.) The Janus Faced Scholar. A Festschrift in Honour of Peter Ingwersen (pp. 13-21). Copenhagen: Royal School of Library and Information Science.
Bennett, P.N. , White, R.W., Chu, W., Dumais, S.T., Bailey, P. , Borisyuk, F. and X. Cui (2012). Modeling the Impact of Short- and Long-Term Behavior on Search Personalization. In Proceedings of SIGIR ‘12. 2012.
Bloom, B. S. and Engelhart, M. D. (1956). Taxonomy of educational objectives : the classification of educational goals. Handbook I, Cognitive domain. London: Longmans.
Bookstein, A. (1982) Information Retrieval: A Sequential Learning Process, Journal of the American Society for Information Science, 34(5):331-341.
Borlund, P. (2003) "IIR evaluation model: a framework for evaluation of interactive information retrieval systems", Information Research, Vol. 8 No. 3, April 2003
Boscarino, C. et al (2012) "Adapting Query Expansion to Search Proficiency"
Boscarino, C., de Vries, A. P., Hollink, V. and van Ossenbruggen, J .(2011) Implicit relevance feedback from a multi-step search process: a use of query-logs. Proceedings of ECIR 2011 Workshop on Information Retrieval Over Query Sessions 2011, Dublin, Ireland, 2011.
Cole,M., Liu,J., Belkin,N.J., Bierig,R., Gwizdka,J., Liu,C., Zhang,J. and Zhang, X. (2009) Usefulness as the criterion for evaluation of interactive information retrieval. In: Proceedings of the Third Human Computer Information Retrieval Workshop, Washington, DC.
Crestani, F., Ruthven, I., Sanderson, M. and van Rijsbergen, C. J.(1995) "The Troubles with Using a Logical Model of IR on a Large Collection of Documents" In: Proceedings of the Fourth Text Retrieval Conference (TREC-4), 1-3 Nov 1995, Maryland, USA.
Downey, D. Dumais,S., Liebling, D. and Horvitz, E. (2008). Understanding the relationship between searchers' queries and information goals. In Proceedings of ’08. 2008
Eickhoff, C., Dekker , P. and de Vries, A.P. Supporting Children’s Web Search in School Environments. In Proceedings of the 4th Conference on Information Interaction in Context (IIiX), Nijmegen, The Netherlands, 2012
Egusa,Y., Takaku,M., Saito, H., Terai, H., Miwa, M. and Kando, N (2010) Using a Concept Map to Evaluate Exploratory Search, Proceedings of the Third Symposium on Information Interaction in Context (IIiX 2010); p.175-184.
Fox, S. Karnawat, K., Mydland, M., Dumais, S. and White, T. (2005). Evaluating implicit measures to improve the search experience. ACM:TOIS, 23(2), 147-168.
Fuhr, N. (2008) A Probability Ranking Principle for Interactive Information Retrieval. Information Retrieval 11(3).
Fujikawa, K., Joho, H. and Nakayama, S. (2012) ”Constraint can affect human perception, behaviour, and performance of search”. In: Proceedings of the 14th International Conference on Asia-Pacific Digital Libraries (ICADL 2012), pp. 39-48, Taipei.
Gwizdka, J. (2010) Distribution of cognitive load in web search,Journal of the American Society for Information Science and Technology, Volume 61, Issue 11, pages 2167–2187,
Halpern, J. Y. (1995) "Reasoning about Knowledge: a Survey", Handbook of Logic in Artificial Intelligence and Logic Programming, Vol. 4 (Eds. D. Gabbay, C.J. Hogger and J.A. Robinson) pp. 1-34
Halpern, J.Y. and Tuttle, M.R. (1993) "Knowledge, Probability, and Adversaries" Journal of the ACM, Volume 40 Issue 4, Sept. 1993, pp. 917 - 960
Hollink, V., He, J. and de Vries, A.P. (2012) Explaining Query Modifications - An Alternative Interpretation of Term Addition and Removal. ECIR 2012: 1-12
Hollink, V., Tsikrika, T. and de Vries, A.P. (2011) Semantic search log analysis: A method and a study on professional image search. JASIST 62(4): 691-713 (2011)
Järvelin,K. (2009) “Interactive Relevance Feedback with Graded Relevance and Sentence Extraction: Simulated User Experiments”. Proceedings of the 18th ACM conference on Information and knowledge management (CIKM 2009) pp. 2053-2056
Järvelin, K., Price, S. L., Delcambre, L.M.L and Nielsen, M. L. (2008). Discounted cumulated gain based evaluation of multiple-query IR sessions. Proceedings of the 30th European Conference on Information Retrieval (ECIR '08), 4-15.
Joho, H., Villa, R, and Jose J. M. (2007) ”Interaction Pool: Towards a user-centred test collection”. In: Proceedings of the Workshop on Web Information Seeking and Interaction, SIGIR 2007, Amsterdam, Netherlands: ACM.
Jones, R. and Klinkner, K. Beyond the Session Timeout: Automatic Hierarchical Segmentation of Search Topics in Query Logs, CIKM 2008.
Kammerer, Y., Nairn, R., Pirolli, P. and Chi, E. H. (2009). Signpost from the masses: learning effects in an exploratory social tag search browser. Paper presented at the Proceedings of the 27th international conference on Human factors in computing systems (CHI'09), Boston, MA, USA.
Kanoulas, E., Carterette,B., Hall,M., Clough, P. and Sanderson, M. (2011) Overview of the TREC 2011 Session Track. In Proceedings of TREC ‘11. 2011. [x 2]
Kelly, D. (2009) "Methods for Evaluating Interactive Information Retrieval Systems with Users", Foundations and Trends in Information Retrieval, Vol. 3, Nos. 1-2 (2009) 1-224
Kelly, D., Dumais, S., and Pedersen, J. O. (2009) Evaluation Challenges and Directions for Information-Seeking Support Systems. IEEE Computer, pp. 44-50.
Kooi,B.P. (2003) "Probabilistic Dynamic Epistemic Logic", Journal of Logic, Language and Information 12: 381-408.
Kotov, A.,Paul, B.N., Ryen W., Dumais, S. and Teevan, J. (2011) Modeling and Analysis of Cross-Session Search Tasks. In Proceedings of SIGIR ‘11. 2011. [x 2]
Kumpulainen, S. and Järvelin, K. (2010). Information Interaction in Molecular Medicine: Integrated Use of Multiple Channels. In: Belkin, N. & al. (Eds.), Proc. of the IIiX 2010, pp. 95--104.
Lindley, S., Meek, S., Sellen, A. and Harper, R. (2012) It’s Simply Integral to What I Do: Enquiries into how the Web is Weaved into Everyday Life, WWW 2012.
Miwa, M., Egusa,Y., Saito, H.,Takaku,M., Terai, H. and Kando, N (2011) A method to capture information encountering embedded in exploratory Web searches, Information Research; vol.16; no.3; 87.
Robertson,S.E. and Hancock-Beaulieu, M. (1992) On the Evaluation of IR Systems, Information Processing and Management 28(4): 457-466 (1992).
Saito, H., Egusa,Y., Takaku,M., Miwa, M. and Kando, N (2012) Using Concept Map to Evaluate Learning by Searching, In Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci2012)
Saito, H., Takaku, M., Egusa, Y., Terai, H., Miwa, M., Kando, N (2010) Connecting Qualitative and Quantitative Analysis of Web Search Process: Analysis Using Search Units. In Proceedings of the Asia Information Retrieval Societies Conference 2010 (AIRS2010): pp. 173-182 (LNCS 6458)
Sakai,T., Kato,M.P. and Song, Y. –I. (2011) Click the search button and be happy: Evaluating direct and immediate information access. Proceedings of the 20th ACM international conference on Information and knowledge management (CIKM2011) pp. 621-630
Schuth, A. and Marx, M. (2011) Evaluation Methods for Rankings of Facetvalues for Faceted Search. Multilingual and Multimodal Information Access Evaluation - Second International Conference of the Cross-Language Evaluation Forum, CLEF 2011: 131-136, 2011.
Smucker, M.D.; Clarke, C.L.A. (2012) Time-based calibration of effectiveness measures. In Proc SIGIR 2012, pp. 95-104. [x 4]
Tague-Sutcliffe, J. (1992) Measuring the Informativeness of a Retrieval Process, In the Proceedings of the 15th ACM SIGIR. p23-36.
ten Cate, B. and Shan, C.C. (2002) "Question Answering: from Partitions to Prolog", Automated Reasoning with Analytic Tableaux and Related Methods (LNCS 2381), pp 251-265
Terai, H., Saito, H., Takaku,M., Egusa,Y., Miwa, M. and Kando, N (2008) Differences between Informational and Transactional Tasks in Information Seeking on the Web, Proceedings of the Second Symposium on Information Interaction in Context (IIiX 2008); pp.152-159
Tran, V.T. and Fuhr, N. (2012) Using Eye-Tracking with Dynamic Areas of Interest for Analyzing Interactive Information Retrieval. In Proc SIGIR 2012, pp. 1165-1166.
Vakkari, P. (2010) Exploratory searching as conceptual exploration. Proceedings of the Fourth Human Computer Information Retrieval Workshop, New Brunswick, NJ, 24-27. [x 2]
Wildemuth, B. (2004) The effects of domain knowledge on search tactic formulation. Journal of the American Society for Information Science and Technology, Volume 55, Issue 3, pages 246–258
Wilson, M. J. and Wilson, M. L. (2012) A Comparison of Techniques for Measuring Sensemaking and Learning within Participant-Generated Summaries. In: Journal of the American Society for Information Science and Technology, (accepted).
Yang, Y. and Lad, A. (2009) “Modeling Expected Utility of Multi-session Information Distillation”. In Proceedings of ICTIR 2009 (LNCS 5766) pp. 164-175.
