The Interpretability of Artificial Intelligence and the Impact of Outcome Feedback on Trust: A Comparative Study | Xue Zhirong's knowledge base
There is another kind of person whose study of Buddhism is not genuine from the very start: his initial view (见地) is wrong, not genuine, so he is forever seeking supernatural powers and miraculous responses. After seeking long enough, strange things do happen: he keeps hearing a voice in the air telling him this and that, and sometimes it is even right; sometimes he sees lights, sees Buddhas. Then what does he conclude? He believes he has supernatural powers. In fact he has no supernatural powers (神通), though he may well have a touch of neurosis (神经).

So one must pay special attention! Those of us who study Buddhism must truly understand the Tathagata's wondrous śamatha, samāpatti, and dhyāna. Once you understand them, you will not go astray; otherwise, whatever you do, you are taking detours. Such a person would be better off not studying Buddhism at all; the moment he starts, his mind short-circuits, and he spends all day being spooky and mysterious, seeing this and seeing that. That is trouble. What do we call such a person? One who "plays at gods and ghosts" (装神弄鬼) — a "Buddha madman" (佛癫子).

There is yet another kind of person who has read many books but never truly practiced. He is learned, like Ānanda: he has read widely and heard the Dharma, yet refuses to actually practice, refuses to put in the effort. His causal ground (因地) and his view are likewise not right. The causal ground and the view matter most! So I tell everyone: the view is most important; one must attain the great, complete understanding (大开圆解).

A Chan master of the Tang dynasty said: "只贵子见地,不贵子行履" — "I value only your view, not your conduct." Once your view is correct, your practice naturally follows. The view must be thorough: with a good, penetrating view, you know that the Śūraṅgama samādhi is inherently complete in every person; we all possess this essence of true suchness (真如理体), and we all have Buddha-nature.

Look at such a person: learned but unwilling to practice, his view crooked. After a long time he can talk as though he understands everything, and then he grows arrogant and conceited: "Ha! What Dharma could you possibly teach? When I took refuge, you hadn't even been born..." Arrogant and glib. Such a person is capable only of "lip-service Chan" (口头禅) — a "Buddha slicker" (佛油子).
© 2015-2025 素超人
A3: The study found that outcome feedback improves the accuracy of users' predictions (reducing absolute error), thereby improving their performance when working with AI. Explainability, however, does not affect user task performance as strongly as it affects trust. This may mean we should pay more attention to how feedback mechanisms can be used effectively to improve the usefulness and effectiveness of AI-assisted decision-making. The results show that feedback has a more significant effect than explainability on improving users' trust in AI, but this increased trust does not translate into a corresponding performance gain. Further analysis suggests that feedback can induce over-trust (accepting the AI's suggestion when it is wrong) or under-trust (ignoring the AI's suggestion when it is correct), which may negate the benefits of increased trust and lead to a "trust-performance paradox". The researchers call for future work on designing explanations that foster appropriate trust, so as to improve the efficiency of human-AI collaboration.
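The two quantities the summary relies on — prediction accuracy measured as absolute error, and over-/under-trust measured by whether users accept AI advice when it is right or wrong — can be sketched as below. This is a minimal illustration of the metrics as described, not the study's actual analysis code; all function and variable names are hypothetical.

```python
# Illustrative sketch (hypothetical names) of the two measures discussed:
# mean absolute error of user predictions, and over-/under-trust rates.

def mean_absolute_error(predictions, truths):
    """Average |prediction - truth|; lower means better task performance."""
    return sum(abs(p - t) for p, t in zip(predictions, truths)) / len(truths)

def trust_calibration(accepted_ai, ai_correct):
    """Rate of over-trust (accepting wrong AI advice) and
    under-trust (rejecting correct AI advice) across decisions."""
    over = sum(1 for a, c in zip(accepted_ai, ai_correct) if a and not c)
    under = sum(1 for a, c in zip(accepted_ai, ai_correct) if not a and c)
    n = len(ai_correct)
    return {"over_trust_rate": over / n, "under_trust_rate": under / n}

# Toy example: user predictions before and after outcome feedback.
before = mean_absolute_error([12, 7, 16], [10, 9, 14])  # error 2.0
after = mean_absolute_error([11, 9, 14], [10, 9, 14])   # error shrinks
rates = trust_calibration([True, True, False, False],
                          [True, False, True, False])
print(before, after, rates)
```

In this toy run, feedback lowers the absolute error, yet one of the four decisions is over-trust and one is under-trust — a small-scale picture of how better-calibrated accuracy and miscalibrated trust can coexist, as the "trust-performance paradox" describes.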